NASA Astrophysics Data System (ADS)
Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.
2014-01-01
Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.
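The cross-wave variance decomposition that ASEM exploits can be sketched with simulated data (hypothetical numbers, not the authors' estimator): because measurement errors are independent across waves, the covariance between two repeated measurements of the same quantity estimates the true-score variance, and the remainder of the observed variance is attributed to measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true = rng.normal(10.0, 2.0, n)       # latent true values, variance 4
x1 = true + rng.normal(0.0, 1.0, n)   # wave-1 measurement, error variance 1
x2 = true + rng.normal(0.0, 1.0, n)   # wave-2 measurement of the same quantity

# errors are independent across waves, so the cross-wave covariance
# recovers the true-score variance; the leftover is measurement error
true_var = np.cov(x1, x2)[0, 1]
error_var = x1.var(ddof=1) - true_var
```

With these simulated values, `true_var` recovers roughly 4 and `error_var` roughly 1, matching the variances used to generate the data.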
Measurement Model Specification Error in LISREL Structural Equation Models.
ERIC Educational Resources Information Center
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
NASA Astrophysics Data System (ADS)
Pradeep, Krishna; Poiroux, Thierry; Scheer, Patrick; Juge, André; Gouget, Gilles; Ghibaudo, Gérard
2018-07-01
This work details the analysis of wafer-level global process variability in 28 nm FD-SOI using split C-V measurements. The proposed approach initially evaluates the native on-wafer process variability using efficient extraction methods on split C-V measurements. The on-wafer threshold voltage (VT) variability is first studied and modeled using a simple analytical model. Then, a statistical model based on the Leti-UTSOI compact model is proposed to describe the total C-V variability in different bias conditions. This statistical model is finally used to study the contribution of each process parameter to the total C-V variability.
Ordinal probability effect measures for group comparisons in multinomial cumulative link models.
Agresti, Alan; Kateri, Maria
2017-03-01
We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/√2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/√2)/[1+exp(β/√2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
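The latent-variable derivation behind these measures can be written out directly; a minimal sketch (function names and illustrative values are ours, not from the paper). With probit-type latents Y* = β·group + ε, ε ~ N(0, 1), the difference of two independent latents is N(β, 2), so the superiority probability is Φ(β/√2); with logistic latent errors the difference has scale approximately √2, giving the logistic approximation.

```python
import math

def std_normal_cdf(x: float) -> float:
    """Standard normal cdf Phi via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordinal_superiority_probit(beta: float) -> float:
    """P(Y1* > Y2*) for normal latents: difference is N(beta, 2),
    so the measure is Phi(beta / sqrt(2))."""
    return std_normal_cdf(beta / math.sqrt(2.0))

def ordinal_superiority_logit(beta: float) -> float:
    """Approximation for logistic latent errors: the difference of two
    standard logistics is roughly logistic with scale sqrt(2)."""
    z = beta / math.sqrt(2.0)
    return math.exp(z) / (1.0 + math.exp(z))
```

For β = 0 both measures equal 0.5, i.e. neither group stochastically dominates, as expected.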
Latent Transition Analysis with a Mixture Item Response Theory Measurement Model
ERIC Educational Resources Information Center
Cho, Sun-Joo; Cohen, Allan S.; Kim, Seock-Ho; Bottge, Brian
2010-01-01
A latent transition analysis (LTA) model was described with a mixture Rasch model (MRM) as the measurement model. Unlike the LTA, which was developed with a latent class measurement model, the LTA-MRM permits within-class variability on the latent variable, making it more useful for measuring treatment effects within latent classes. A simulation…
Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.
2011-01-01
A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…
Models that predict standing crop of stream fish from habitat variables: 1950-85.
K.D. Fausch; C.L. Hawkes; M.G. Parsons
1988-01-01
We reviewed mathematical models that predict standing crop of stream fish (number or biomass per unit area or length of stream) from measurable habitat variables and classified them by the types of independent habitat variables found significant, by mathematical structure, and by model quality. Habitat variables were of three types and were measured on different scales...
NASA Astrophysics Data System (ADS)
Rachmawati, Vimala; Khusnul Arif, Didik; Adzkiya, Dieky
2018-03-01
The systems contained in the universe often have a large order. Thus, the mathematical model has many state variables, which increases computation time. In addition, generally not all variables are known, so estimation is needed to determine the quantities of the system that cannot be measured directly. In this paper, we discuss model reduction and the estimation of state variables in a river system in order to measure the water level. Model reduction approximates a system by one of lower order that, without significant error, preserves dynamic behaviour similar to that of the original system. The Singular Perturbation Approximation method is one model reduction method, in which all state variables of the equilibrium system are partitioned into fast and slow modes. The Kalman filter algorithm is then used to estimate the state variables of stochastic dynamic systems, with estimates computed by predicting the state variables from the system dynamics and correcting them with measurement data. Kalman filters are applied to both the original and the reduced system. Finally, we compare the state estimates and computation times of the original and reduced systems.
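The predict/update cycle of the Kalman filter used above can be sketched for a generic linear system (a toy position/velocity model with made-up noise levels, not the river system from the paper):

```python
import numpy as np

def kalman_filter(A, C, Q, R, x0, P0, ys):
    """Standard predict/update Kalman filter for
    x_{k+1} = A x_k + w,  y_k = C x_k + v,  w ~ N(0,Q), v ~ N(0,R)."""
    x, P = x0, P0
    estimates = []
    for y in ys:
        # predict one step ahead using the system dynamics
        x = A @ x
        P = A @ P @ A.T + Q
        # correct the prediction with the measurement y
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# demo: position/velocity system where only the noisy position is measured
rng = np.random.default_rng(1)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[1.0]])

x_true, truths, ys = np.array([0.0, 0.1]), [], []
for _ in range(200):
    x_true = A @ x_true + rng.normal(0.0, 0.1, 2)
    truths.append(x_true.copy())
    ys.append(C @ x_true + rng.normal(0.0, 1.0, 1))

est = kalman_filter(A, C, Q, R, np.zeros(2), np.eye(2), ys)
truths = np.array(truths)
filter_err = np.abs(est[:, 0] - truths[:, 0]).mean()
raw_err = np.abs(np.array(ys)[:, 0] - truths[:, 0]).mean()
```

Because the filter blends the dynamics model with the measurements, `filter_err` comes out smaller than the raw measurement error `raw_err`.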
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loubenets, Elena R.
We prove the existence for each Hilbert space of the two new quasi hidden variable (qHV) models, statistically noncontextual and context-invariant, reproducing all the von Neumann joint probabilities via non-negative values of real-valued measures and all the quantum product expectations—via the qHV (classical-like) average of the product of the corresponding random variables. In a context-invariant model, a quantum observable X can be represented by a variety of random variables satisfying the functional condition required in quantum foundations but each of these random variables equivalently models X under all joint von Neumann measurements, regardless of their contexts. The proved existence of this model negates the general opinion that, in terms of random variables, the Hilbert space description of all the joint von Neumann measurements for dimH≥3 can be reproduced only contextually. The existence of a statistically noncontextual qHV model, in particular, implies that every N-partite quantum state admits a local quasi hidden variable model introduced in Loubenets [J. Math. Phys. 53, 022201 (2012)]. The new results of the present paper point also to the generality of the quasi-classical probability model proposed in Loubenets [J. Phys. A: Math. Theor. 45, 185306 (2012)].
Johannes Breidenbach; Clara Antón-Fernández; Hans Petersson; Ronald E. McRoberts; Rasmus Astrup
2014-01-01
National Forest Inventories (NFIs) provide estimates of forest parameters for national and regional scales. Many key variables of interest, such as biomass and timber volume, cannot be measured directly in the field. Instead, models are used to predict those variables from measurements of other field variables. Therefore, the uncertainty or variability of NFI estimates...
Measuring Variability and Change with an Item Response Model for Polytomous Variables.
ERIC Educational Resources Information Center
Eid, Michael; Hoffman, Lore
1998-01-01
An extension of the graded-response model of F. Samejima (1969) is presented for the measurement of variability and change. The model is illustrated with a longitudinal study of student interest in radioactivity conducted with about 1,200 German students in elementary school when the study began. (SLD)
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, input variable selection simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
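A minimal version of mutual-information-based input selection can be sketched as follows (a plain histogram MI estimator on simulated data, not the PMI, GA-ANN, or IIS algorithms used in the study):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def rank_inputs(X, y, bins=16):
    """Order candidate input columns by decreasing MI with the target."""
    scores = [mutual_information(X[:, j], y, bins) for j in range(X.shape[1])]
    return sorted(range(X.shape[1]), key=lambda j: -scores[j])

# demo: the target depends nonlinearly on column 0 only
rng = np.random.default_rng(7)
X = rng.uniform(-1.0, 1.0, (5000, 3))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=5000)
order = rank_inputs(X, y)
```

Because MI captures nonlinear dependence, the sinusoidal relation in column 0 is ranked first even though its linear correlation with the target is weak.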
Relevance of anisotropy and spatial variability of gas diffusivity for soil-gas transport
NASA Astrophysics Data System (ADS)
Schack-Kirchner, Helmer; Kühne, Anke; Lang, Friederike
2017-04-01
Models of soil gas transport generally consider neither the direction dependence of gas diffusivity nor its small-scale variability. However, in a recent study we could provide evidence for anisotropy favouring vertical gas diffusion in natural soils. We hypothesize that gas transport models based on gas diffusion data measured with soil rings are strongly influenced by both anisotropy and spatial variability, and that the use of averaged diffusivities could be misleading. To test this, we used a 2-dimensional model of soil gas transport under compacted wheel tracks to model the soil-air oxygen distribution in the soil. The model was parametrized with data obtained from soil-ring measurements, using their central tendency and variability, and includes vertical parameter variability as well as variation perpendicular to the elongated wheel track. Three parametrization types were tested: (i) averaged values for the wheel track and the undisturbed soil; (ii) randomly distributed soil cells with normally distributed variability within the strata; (iii) randomly distributed soil cells with uniformly distributed variability within the strata. Each type was tested for (a) isotropic gas diffusivity and (b) horizontally reduced gas diffusivity (by a constant factor), yielding six models in total. As expected, the different parametrizations had an important influence on the aeration state under the wheel tracks, with the strongest oxygen depletion in the case of uniformly distributed variability and anisotropy towards higher vertical diffusivity. The simple simulation approach clearly showed the relevance of anisotropy and spatial variability even for identical central-tendency measures of gas diffusivity. However, it did not yet consider spatial dependency of the variability, which could aggravate the effects even further.
To account for anisotropy and spatial variability in gas transport models, we recommend a) measuring soil-gas transport parameters in a spatially explicit way, including different directions, and b) using random-field stochastic models to assess the possible effects for gas-exchange models.
Multiple indicators, multiple causes measurement error models
Tekwe, Carmen D.; Carter, Randy L.; Cullings, Harry M.; ...
2014-06-25
Multiple indicators, multiple causes (MIMIC) models are often employed by researchers studying the effects of an unobservable latent variable on a set of outcomes, when causes of the latent variable are observed. There are times, however, when the causes of the latent variable are not observed because measurements of the causal variable are contaminated by measurement error. The objectives of this study are as follows: (i) to develop a novel model by extending the classical linear MIMIC model to allow both Berkson and classical measurement errors, defining the MIMIC measurement error (MIMIC ME) model; (ii) to develop likelihood-based estimation methods for the MIMIC ME model; and (iii) to apply the newly defined MIMIC ME model to atomic bomb survivor data to study the impact of dyslipidemia and radiation dose on the physical manifestations of dyslipidemia. Finally, as a by-product of our work, we also obtain a data-driven estimate of the variance of the classical measurement error associated with an estimate of the amount of radiation dose received by atomic bomb survivors at the time of their exposure.
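The distinction between the two error types the model accommodates can be illustrated with a small simulation (a hypothetical regression setting, not the MIMIC ME likelihood): classical error attenuates a naive regression slope toward zero, while Berkson error leaves it unbiased.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
beta = 1.5

# classical error: observed W = X + U, with U independent of the true X
x = rng.normal(0.0, 1.0, n)
w_classical = x + rng.normal(0.0, 1.0, n)
y = beta * x + rng.normal(0.0, 0.5, n)
slope_classical = np.cov(w_classical, y)[0, 1] / w_classical.var(ddof=1)

# Berkson error: true X = W + U, with U independent of the assigned W
w_berkson = rng.normal(0.0, 1.0, n)
x_b = w_berkson + rng.normal(0.0, 1.0, n)
y_b = beta * x_b + rng.normal(0.0, 0.5, n)
slope_berkson = np.cov(w_berkson, y_b)[0, 1] / w_berkson.var(ddof=1)
```

With equal true and error variances, the classical-error slope is attenuated to roughly beta/2 = 0.75, while the Berkson-error slope stays near 1.5.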
Behavioral Scale Reliability and Measurement Invariance Evaluation Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko
2004-01-01
A latent variable modeling approach to reliability and measurement invariance evaluation for multiple-component measuring instruments is outlined. An initial discussion deals with the limitations of coefficient alpha, a frequently used index of composite reliability. A widely and readily applicable structural modeling framework is next described…
Unfinished Business in Clarifying Causal Measurement: Commentary on Bainter and Bollen
ERIC Educational Resources Information Center
Markus, Keith A.
2014-01-01
In a series of articles and comments, Kenneth Bollen and his collaborators have incrementally refined an account of structural equation models that (a) model a latent variable as the effect of several observed variables and (b) carry an interpretation of the observed variables as, in some sense, measures of the latent variable that they cause.…
Empirical spatial econometric modelling of small scale neighbourhood
NASA Astrophysics Data System (ADS)
Gerkman, Linda
2012-07-01
The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing the small scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application to new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to capture small scale neighbourhood effects that are known to exist, even when proper variables to measure them are lacking.
Hoyle, R H
1991-02-01
Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.
Chen, Yun; Yang, Hui
2016-01-01
In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges to the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
Causal Models with Unmeasured Variables: An Introduction to LISREL.
ERIC Educational Resources Information Center
Wolfle, Lee M.
Whenever one uses ordinary least squares regression, one is making an implicit assumption that all of the independent variables have been measured without error. Such an assumption is obviously unrealistic for most social data. One approach for estimating such regression models is to measure implied coefficients between latent variables for which…
Evaluation of solar irradiance models for climate studies
NASA Astrophysics Data System (ADS)
Ball, William; Yeo, Kok-Leng; Krivova, Natalie; Solanki, Sami; Unruh, Yvonne; Morrill, Jeff
2015-04-01
Instruments on satellites have been observing both Total Solar Irradiance (TSI) and Spectral Solar Irradiance (SSI), mainly in the ultraviolet (UV), since 1978. Models were developed to reproduce the observed variability and to compute the variability at wavelengths that were not observed or had an uncertainty too high to determine an accurate rotational or solar cycle variability. However, various models and measurements show different solar cycle SSI variability, which leads to different modelled responses of ozone and temperature in the stratosphere, mainly due to the different UV variability in each model, and the global energy balance. The NRLSSI and SATIRE-S models are the most comprehensive reconstructions of solar irradiance variability for the period from 1978 to the present day. But while NRLSSI and SATIRE-S show similar solar cycle variability below 250 nm, between 250 and 400 nm SATIRE-S typically displays 50% larger variability, which is, however, still significantly less than suggested by recent SORCE data. Due to large uncertainties and inconsistencies in some observational datasets, it is difficult to determine in a simple way which model is likely to be closer to the true solar variability. We review solar irradiance variability measurements and modelling and employ new analysis that sheds light on the causes of the discrepancies between the two models and with the observations.
Permutation importance: a corrected feature importance measure.
Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas
2010-05-15
In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
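The permutation scheme behind PIMP can be sketched in a few lines (with a toy correlation-based importance standing in for the paper's RF importance; names and data are ours):

```python
import numpy as np

def corr_importance(X, y):
    """Toy stand-in for a random-forest importance:
    absolute Pearson correlation of each feature with the outcome."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

def pimp_pvalues(X, y, importance, n_perm=200, seed=0):
    """PIMP-style correction: permute the outcome vector to build a null
    distribution of each feature's importance; the p-value is the share of
    permuted importances at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    obs = importance(X, y)
    null = np.array([importance(X, rng.permutation(y)) for _ in range(n_perm)])
    return (1 + (null >= obs).sum(axis=0)) / (n_perm + 1)

# demo: only the first of four features is informative
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = X[:, 0] + rng.normal(size=2000)
pvals = pimp_pvalues(X, y, corr_importance)
```

The informative feature receives a p-value at the resolution floor of the permutation count, while the noise features do not stand out from the null distribution.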
Evaluation of Deep Learning Models for Predicting CO2 Flux
NASA Astrophysics Data System (ADS)
Halem, M.; Nguyen, P.; Frankel, D.
2017-12-01
Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detailed relationships between the variables. However, the accuracy of neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability and complexity of interactions between physical variables such as wind and temperature, and indirect variables like latent heat, sensible heat, etc. We evaluate two deep learning models, feed-forward and recurrent neural network models, to learn how each responds to the physical measurements and to the time dependency of measurements of CO2 concentration, humidity, pressure, temperature, wind speed, etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of choosing the surface variables and model hyper-parameters on the accuracy and predictions of surface flux; c) assessing the applicability of the neural network models for estimating CO2 flux using OCO-2 satellite data; d) studying the efficiency of using GPU acceleration for neural network performance using IBM PowerAI deep learning software and packages on an IBM Minsky system.
Ronald E. McRoberts; Veronica C. Lessard
2001-01-01
Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...
USDA-ARS?s Scientific Manuscript database
Passive capillary lysimeters (PCLs) are uniquely suited for measuring water fluxes in variably-saturated soils. The objective of this work was to compare PCL flux measurements with simulated fluxes obtained with a calibrated unsaturated flow model. The Richards equation-based model was calibrated us...
Sample Size Limits for Estimating Upper Level Mediation Models Using Multilevel SEM
ERIC Educational Resources Information Center
Li, Xin; Beretvas, S. Natasha
2013-01-01
This simulation study investigated use of the multilevel structural equation model (MLSEM) for handling measurement error in both mediator and outcome variables ("M" and "Y") in an upper level multilevel mediation model. Mediation and outcome variable indicators were generated with measurement error. Parameter and standard…
Modeling Signal-Noise Processes Supports Student Construction of a Hierarchical Image of Sample
ERIC Educational Resources Information Center
Lehrer, Richard
2017-01-01
Grade 6 (modal age 11) students invented and revised models of the variability generated as each measured the perimeter of a table in their classroom. To construct models, students represented variability as a linear composite of true measure (signal) and multiple sources of random error. Students revised models by developing sampling…
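The students' signal-plus-noise model of repeated measurement can be simulated directly (all numbers hypothetical): each measurement is the true perimeter (the signal) plus a random error draw (the noise), so the sample mean clusters near the signal while the sample spread reflects the error process.

```python
import numpy as np

rng = np.random.default_rng(42)
true_perimeter = 316.0        # cm; hypothetical "true measure" of the table
n_students = 25
# each student's measurement = signal + random error (the noise component)
errors = rng.normal(0.0, 2.5, n_students)   # assumed error spread, sd = 2.5 cm
measurements = true_perimeter + errors

sample_mean = measurements.mean()     # clusters near the true measure
sample_sd = measurements.std(ddof=1)  # reflects the random error process
```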
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
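The preemptive-constraining idea can be illustrated with a deliberately simple sketch (a box-constraint clamp on the state estimate; this is our hypothetical stand-in, as the patent's actual constraining logic is not specified here):

```python
import numpy as np

def constrain_estimate(x, P, lower, upper):
    """Hypothetical sketch of a preemptive-constraining step: clamp the
    state estimate to box constraints and re-symmetrize the covariance so
    that subsequent filter updates start from a valid matrix."""
    x_c = np.clip(x, lower, upper)
    P_c = 0.5 * (P + P.T)   # enforce symmetry
    return x_c, P_c

# e.g. mass-fraction-like states in a gasifier must stay within [0, 1]
x_hat = np.array([1.3, -0.2])
P_hat = np.array([[1.0, 0.1], [0.1, 1.0]])
x_con, P_con = constrain_estimate(x_hat, P_hat, np.zeros(2), np.ones(2))
```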
Observations and Models of Highly Intermittent Phytoplankton Distributions
Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu
2014-01-01
The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data at greater resolution, displaying high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misinterpret the reality of planktonic ecosystems, especially at the microscale level. To consider intermittency of variables, in this work, a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of the variance of overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea about the importance of the fluctuating parts of model variables and about when to use the closure approach. Comparisons of plots of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach, with real observations show that the approach conforms well to the data. PMID:24787740
Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart
2013-11-01
This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar years 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available with occupants involved in frontal crashes and morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criterion (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model-averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess individual variable contribution to injury risk.
Scenario 1 used vehicle variables; Scenario 2, vehicle and demographic variables; Scenario 3, vehicle and morphomic variables; and Scenario 4 used all variables. AIC was used to rank the models and to address over-fitting. In each scenario, the results based on the top three models and the averages of the top 100 models were presented. The AIC and the area under the receiver operating characteristic curve (AUC) were reported in each model. The models were re-fitted after removing each variable one at a time. The increases of AIC and the decreases of AUC were then assessed to measure the contribution and importance of the individual variables in each model. The importance of the individual variables was also determined by their weighted frequencies of appearance in the top 100 selected models. Overall, the AUC was 0.58 in Scenario 1, 0.78 in Scenario 2, 0.76 in Scenario 3 and 0.82 in Scenario 4. The results showed that morphomic variables are as accurate at predicting injury risk as demographic variables. The results of this study emphasize the importance of including morphomic variables when assessing injury risk. The results also highlight the need for morphomic data in the development of human mathematical models when assessing restraint performance in frontal crashes, since morphomic variables are more "tangible" measurements compared to demographic variables such as age and gender. Copyright © 2013 Elsevier Ltd. All rights reserved.
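The model-averaging step described above can be sketched with Akaike weights: each candidate model's AIC is converted into a relative weight, and per-model predictions are combined accordingly. The AIC values and per-model injury risks below are hypothetical, for illustration only.

```python
import math

# Sketch of AIC-based model averaging: candidate models are ranked by AIC,
# AIC differences are converted into Akaike weights, and each model's
# prediction is weighted accordingly. All numbers are hypothetical.

def akaike_weights(aics):
    """Relative likelihoods exp(-delta/2), normalized to sum to one."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

def averaged_prediction(aics, preds):
    """Average the per-model predictions using their Akaike weights."""
    return sum(w * p for w, p in zip(akaike_weights(aics), preds))

aics = [210.0, 211.5, 214.0]   # hypothetical top-ranked models
preds = [0.30, 0.35, 0.25]     # hypothetical per-model injury risks
risk = averaged_prediction(aics, preds)
```

Because the weights decay exponentially in the AIC difference, poorly ranked models contribute little, which is how averaging over the top 100 models damps model-selection uncertainty.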
Variability aware compact model characterization for statistical circuit design optimization
NASA Astrophysics Data System (ADS)
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
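The linear propagation of variance mentioned above is a first-order approximation: for independent parameters, the output variance is the sum of squared sensitivities times the parameter variances. The toy drain-current expression and parameter statistics below are assumptions for illustration, not the EKV compact model.

```python
# First-order (linear) propagation of variance: for independent parameters,
# Var(f) ~ sum_i (df/dp_i)^2 * Var(p_i). The toy current expression
# I = k * (Vg - Vt) and the parameter statistics are illustrative only.

def propagate_variance(grad, variances):
    """Output variance from sensitivities and independent parameter variances."""
    return sum(g * g * v for g, v in zip(grad, variances))

k, vg, vt = 2e-4, 1.0, 0.4
grad = [vg - vt, -k]           # [dI/dk, dI/dVt] for I = k * (vg - vt)
variances = [1e-10, 1e-4]      # assumed Var(k), Var(Vt)
var_I = propagate_variance(grad, variances)
```

The same formula extends to correlated parameters by adding cross terms with the parameter covariances.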
10Be measured in a GRIP snow pit and modeled using the ECHAM5-HAM general circulation model
NASA Astrophysics Data System (ADS)
Heikkilä, U.; Beer, J.; Jouzel, J.; Feichter, J.; Kubik, P.
2008-03-01
10Be measured in a Greenland Ice Core Project (GRIP) snow pit (1986-1990) with a seasonal resolution is compared with the ECHAM5-HAM GCM run. The mean modeled 10Be concentration in ice (1.0×10⁴ atoms/g) agrees well with the measured value (1.2×10⁴ atoms/g). The measured 10Be deposition flux (88 atoms/m²/s) also agrees well with the modeled flux (69 atoms/m²/s), and the measured precipitation rate (0.67 mm/day) agrees with the modeled rate (0.61 mm/day). The mean surface temperature of -31°C estimated from δ¹⁸O is lower than the temperature measured at a near-by weather station (-29°C) and the modeled temperature (-26°C). During the 5-year period the concentrations and deposition fluxes, both measured and modeled, show a decreasing trend consistent with the increase in solar activity. The variability of the measured and modeled concentrations and deposition fluxes is very similar, suggesting that the variability is linked to variability in production rather than to local meteorology.
On the measurement of stability in over-time data.
Kenny, D A; Campbell, D T
1989-06-01
In this article, autoregressive models and growth curve models are compared. Autoregressive models are useful because they allow for random change, permit scores to increase or decrease, and do not require strong assumptions about the level of measurement. Three previously presented designs for estimating stability are described: (a) time-series, (b) simplex, and (c) two-wave, one-factor methods. A two-wave, multiple-factor model also is presented, in which the variables are assumed to be caused by a set of latent variables. The factor structure does not change over time and so the synchronous relationships are temporally invariant. The factors do not cause each other and have the same stability. The parameters of the model are the factor loading structure, each variable's reliability, and the stability of the factors. We apply the model to two data sets. For eight cognitive skill variables measured at four times, the 2-year stability is estimated to be .92 and the 6-year stability is .83. For nine personality variables, the 3-year stability is .68. We speculate that for many variables there are two components: one component that changes very slowly (the trait component) and another that changes very rapidly (the state component); thus each variable is a mixture of trait and state. Circumstantial evidence supporting this view is presented.
How Well Has Global Ocean Heat Content Variability Been Measured?
NASA Astrophysics Data System (ADS)
Nelson, A.; Weiss, J.; Fox-Kemper, B.; Fabienne, G.
2016-12-01
We introduce a new strategy that uses synthetic observations of an ensemble of model simulations to test the fidelity of an observational strategy, quantifying how well it captures the statistics of variability. We apply this test to the 0-700m global ocean heat content anomaly (OHCA) as observed with in-situ measurements by the Coriolis Dataset for Reanalysis (CORA), using the Community Climate System Model (CCSM) version 3.5. One-year running mean OHCAs for the years 2005 onward are found to faithfully capture the variability. During these years, synthetic observations of the model are strongly correlated at 0.94±0.06 with the actual state of the model. Overall, sub-annual variability and data before 2005 are significantly affected by the variability of the observing system. In contrast, the sometimes-used weighted integral of observations is not a good indicator of OHCA as variability in the observing system contaminates dynamical variability.
Evaluation of Scale Reliability with Binary Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Dimitrov, Dimiter M.; Asparouhov, Tihomir
2010-01-01
A method for interval estimation of scale reliability with discrete data is outlined. The approach is applicable with multi-item instruments consisting of binary measures, and is developed within the latent variable modeling methodology. The procedure is useful for evaluation of consistency of single measures and of sum scores from item sets…
On the explaining-away phenomenon in multivariate latent variable models.
van Rijn, Peter; Rijmen, Frank
2015-02-01
Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
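The explaining-away phenomenon discussed above can be shown by exact enumeration on a toy network: two latent causes A and B that are independent a priori, with a common observed variable Y = A OR B. The prior probabilities are illustrative; conditioning on Y makes A and B dependent, and learning B = 1 lowers the probability of A.

```python
from itertools import product

# Toy Bayesian network exhibiting explaining away: latent causes A and B are
# marginally independent, the observed variable is Y = A OR B, and the prior
# probabilities are illustrative.

p_a, p_b = 0.3, 0.3

def joint(a, b, y):
    """P(A=a, B=b, Y=y) with Y deterministically equal to (A or B)."""
    pa = p_a if a else 1 - p_a
    pb = p_b if b else 1 - p_b
    return pa * pb if y == int(a or b) else 0.0

p_y1 = sum(joint(a, b, 1) for a, b in product([0, 1], repeat=2))
p_a_given_y = sum(joint(1, b, 1) for b in [0, 1]) / p_y1             # ~0.588
p_a_given_yb = joint(1, 1, 1) / sum(joint(a, 1, 1) for a in [0, 1])  # 0.3
```

Observing Y = 1 raises belief in A from 0.3 to about 0.59; additionally learning B = 1 "explains away" the observation, dropping belief in A back to its prior — the seemingly counterintuitive conditional dependence the abstract describes.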
Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments
NASA Astrophysics Data System (ADS)
Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.
2015-12-01
Our limited understanding of the geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply a multiple-point statistics (MPS) algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce the natural variability of the experimental depositional system fairly accurately.
Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
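The selection step above — keep a snapshot as a training image only if it is dissimilar to every image already kept — can be sketched with a simple pixel-wise distance. The tiny binary "images", the distance, and the threshold are toy stand-ins; the study's actual dissimilarity measure between depositional patterns is not reproduced here.

```python
# Greedy selection of mutually dissimilar training images, sketched with a
# pixel-wise distance on tiny binary "snapshots". All inputs are toys.

def dissimilarity(img_a, img_b):
    """Mean absolute pixel difference between two equally sized images."""
    flat_a = [p for row in img_a for p in row]
    flat_b = [p for row in img_b for p in row]
    return sum(abs(a - b) for a, b in zip(flat_a, flat_b)) / len(flat_a)

def select_training_images(images, threshold):
    """Keep an image only if it is dissimilar to every image already kept."""
    kept = []
    for img in images:
        if all(dissimilarity(img, k) >= threshold for k in kept):
            kept.append(img)
    return kept

snapshots = [
    [[1, 1], [0, 0]],   # one depositional pattern (toy)
    [[1, 1], [0, 1]],   # near-duplicate of the first
    [[0, 0], [1, 1]],   # a distinct pattern
]
training_set = select_training_images(snapshots, threshold=0.5)
```

Here the near-duplicate is rejected and only the two genuinely different patterns survive, mirroring the goal of a minimal prior set that still spans the observed variability.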
Using structural equation modeling to investigate relationships among ecological variables
Malaeb, Z.A.; Kevin, Summers J.; Pugesek, B.H.
2000-01-01
Structural equation modeling is an advanced multivariate statistical process with which a researcher can construct theoretical concepts, test their measurement reliability, hypothesize and test a theory about their relationships, take into account measurement errors, and consider both direct and indirect effects of variables on one another. Latent variables are theoretical concepts that unite phenomena under a single term, e.g., ecosystem health, environmental condition, and pollution (Bollen, 1989). Latent variables are not measured directly but can be expressed in terms of one or more directly measurable variables called indicators. For some researchers, defining, constructing, and examining the validity of latent variables may be an end in itself. For others, testing hypothesized relationships of latent variables may be of interest. We analyzed the correlation matrix of eleven environmental variables from the U.S. Environmental Protection Agency's (USEPA) Environmental Monitoring and Assessment Program for Estuaries (EMAP-E) using methods of structural equation modeling. We hypothesized and tested a conceptual model to characterize the interdependencies between four latent variables-sediment contamination, natural variability, biodiversity, and growth potential. In particular, we were interested in measuring the direct, indirect, and total effects of sediment contamination and natural variability on biodiversity and growth potential. The model fit the data well and accounted for 81% of the variability in biodiversity and 69% of the variability in growth potential. It revealed a positive total effect of natural variability on growth potential that otherwise would have been judged negative had we not considered indirect effects. That is, natural variability had a negative direct effect on growth potential of magnitude -0.3251 and a positive indirect effect mediated through biodiversity of magnitude 0.4509, yielding a net positive total effect of 0.1258.
Natural variability had a positive direct effect on biodiversity of magnitude 0.5347 and a negative indirect effect mediated through growth potential of magnitude -0.1105, yielding a positive total effect of magnitude 0.4242. Sediment contamination had a negative direct effect on biodiversity of magnitude -0.1956 and a negative indirect effect on growth potential via biodiversity of magnitude -0.067. Biodiversity had a positive effect on growth potential of magnitude 0.8432, and growth potential had a positive effect on biodiversity of magnitude 0.3398. The correlation between biodiversity and growth potential was estimated at 0.7658 and that between sediment contamination and natural variability at -0.3769.
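The decomposition reported here can be checked directly: in a structural equation model, a total effect is the sum of the direct effect and the indirect effects carried through mediating variables. The check below uses the magnitudes quoted in the abstract.

```python
# Total effect = direct effect + indirect effect(s) through mediators.
# The inputs are the path magnitudes reported in the abstract above.

def total_effect(direct, indirect):
    return direct + indirect

# Natural variability -> growth potential, mediated through biodiversity
nv_gp = total_effect(-0.3251, 0.4509)   # 0.1258
# Natural variability -> biodiversity, mediated through growth potential
nv_bd = total_effect(0.5347, -0.1105)   # 0.4242
```

Both sums reproduce the reported totals, including the sign reversal for the effect on growth potential that motivated considering indirect effects in the first place.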
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
NASA Astrophysics Data System (ADS)
Collatz, G. J.; Kawa, S. R.; Liu, Y.; Zeng, F.; Ivanoff, A.
2013-12-01
We evaluate our understanding of the land biospheric carbon cycle by benchmarking a model and its variants to atmospheric CO2 observations and to an atmospheric CO2 inversion. Though the seasonal cycle in CO2 observations is well simulated by the model (RMSE/standard deviation of observations <0.5 at most sites north of 15N and <1 for Southern Hemisphere sites), different model setups suggest that the CO2 seasonal cycle provides some constraint on gross photosynthesis, respiration, and fire fluxes revealed in the amplitude and phase at northern latitude sites. CarbonTracker inversions (CT) and the model show similar phasing of the seasonal fluxes, but agreement in the amplitude varies by region. We also evaluate interannual variability (IAV) in the measured atmospheric CO2 which, in contrast to the seasonal cycle, is not well represented by the model. We estimate the contributions of biospheric and fire fluxes and atmospheric transport variability to explaining observed variability in measured CO2. Comparisons with CT show that modeled IAV has some correspondence to the inversion results >40N, though fluxes match poorly at regional to continental scales. Regional and global fire emissions are strongly correlated with variability observed at northern flask sample sites and in the global atmospheric CO2 growth rate, though in the latter case fire emission anomalies are not large enough to account fully for the observed variability. We discuss remaining unexplained variability in CO2 observations in terms of the representation of fluxes by the model. This work also demonstrates the limitations of the current network of CO2 observations and the potential of new denser surface measurements and space-based column measurements for constraining carbon cycle processes in models.
Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H
2009-09-01
Retinal nerve fiber layer (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and the degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.
Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio
2015-08-15
In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA) (two non-destructive methods) were applied in Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking to find the most significant variables to produce optimal accuracy of the regression models, was applied to a joint set of variables (NIR wavelengths and the AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R²) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (the AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R² values of 0.76 (PLS) and 0.77 (LS-SVM). These results indicated that the proposed mutual-information-based variable selection algorithm was a powerful tool for the selection of the most relevant variables. © 2014 Society of Chemical Industry.
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured movement time and endpoint variability data were then used to verify the models. Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) was successful, predicting more than 90.7% of data variance for 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
Quantifying measurement uncertainty and spatial variability in the context of model evaluation
NASA Astrophysics Data System (ADS)
Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.
2017-12-01
In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind-profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B
2012-04-01
Several multi-segment foot models to measure the motion of intrinsic joints of the foot have been reported. Use of these models in clinical decision making is limited due to the lack of rigorous validation, including inter-clinician and inter-lab variability measures. A model with thoroughly quantified variability may significantly improve confidence in the results of such foot models. This study proposes a new clinical foot model with the underlying strategy of using separate anatomic and technical marker configurations and coordinate systems. Anatomical landmark and coordinate system identification is determined during a static subject calibration. Technical markers are located at optimal sites for dynamic motion tracking. The model is comprised of the tibia and three foot segments (hindfoot, forefoot and hallux), and inter-segmental joint angles are computed in three planes. Data collection was carried out on pediatric subjects at two sites (Site 1: n=10 subjects by two clinicians; Site 2: five subjects by one clinician). A plaster mold method was used to quantify static intra-clinician and inter-clinician marker placement variability by allowing direct comparisons of marker data between sessions for each subject. Intra-clinician and inter-clinician joint angle variability were less than 4°. For dynamic walking kinematics, intra-clinician, inter-clinician and inter-laboratory variability were less than 6° for the ankle and forefoot, but slightly higher for the hallux. Inter-trial variability accounted for 2-4° of the total dynamic variability. Results indicate the proposed foot model reduces the effects of marker placement variability on computed foot kinematics during walking compared to similar measures in previous models. Copyright © 2011 Elsevier B.V. All rights reserved.
A Bayesian Measurement Error Model for Misaligned Radiographic Data
Lennox, Kristin P.; Glascoe, Lee G.
2013-09-06
An understanding of the inherent variability in micro-computed tomography (micro-CT) data is essential to tasks such as statistical process control and the validation of radiographic simulation tools. The data present unique challenges to variability analysis due to the relatively low resolution of radiographs, and also due to minor variations from run to run which can result in misalignment or magnification changes between repeated measurements of a sample. Positioning changes artificially inflate the variability of the data in ways that mask true physical phenomena. We present a novel Bayesian nonparametric regression model that incorporates both additive and multiplicative measurement error in addition to heteroscedasticity to address this problem. We also use this model to assess the effects of sample thickness and sample position on measurement variability for an aluminum specimen. Supplementary materials for this article are available online.
Yu, Ping; Qian, Siyu
2018-01-01
Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates six variables in the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables-training, self-efficacy, system quality and information quality-on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time.
Estimators for longitudinal latent exposure models: examining measurement model assumptions.
Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D
2017-06-15
Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was also recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
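The abstract's point that reliable components need not yield a reliable ratio can be demonstrated with a small simulation. All distributions and noise levels below are illustrative assumptions: when the numerator and denominator are strongly correlated across subjects, the true ratio varies little and measurement noise dominates its test-retest reliability.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

base = rng.normal(0, 1, n)                          # shared subject-level factor
num_true = 50 + 5 * base + rng.normal(0, 0.5, n)    # e.g., true peak force
den_true = 70 + 7 * base + rng.normal(0, 0.7, n)    # e.g., true body mass

def trial(true, sd):
    # One measurement occasion: true score plus measurement error.
    return true + rng.normal(0, sd, n)

def retest_r(a, b):
    # Test-retest correlation as a simple reliability proxy.
    return np.corrcoef(a, b)[0, 1]

n1, n2 = trial(num_true, 1.0), trial(num_true, 1.0)
d1, d2 = trial(den_true, 1.0), trial(den_true, 1.0)

r_num = retest_r(n1, n2)
r_den = retest_r(d1, d2)
r_ratio = retest_r(n1 / d1, n2 / d2)

print(r_num, r_den, r_ratio)  # ratio reliability is markedly lower
```

Both components are highly reliable, yet the derived ratio is not, which is exactly why the abstract advises computing the ratio's reliability directly.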
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard
2011-01-01
In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
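The EIV correction described here reduces, in the single-predictor case, to dividing the attenuated OLS slope by the reliability ratio. A minimal sketch with simulated data, in which the reliability is taken as known (in practice it would be derived from the KXRF uncertainty data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta_true = 2.0

x_true = rng.normal(0.0, 1.0, n)          # latent bone-lead level
u = rng.normal(0.0, 1.0, n)               # KXRF-style measurement error
x_obs = x_true + u                        # observed, error-prone predictor
y = beta_true * x_true + rng.normal(0.0, 1.0, n)

# Naive OLS on the error-prone predictor is attenuated toward zero.
beta_ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Reliability ratio: share of observed variance that is true-score variance.
reliability = np.var(x_true) / np.var(x_obs)

# Errors-in-variables (method-of-moments) correction.
beta_eiv = beta_ols / reliability

print(beta_ols, beta_eiv)  # roughly 1.0 and 2.0
```

The corrected estimate recovers the true slope on average, at the cost of a larger variance, matching the bias/variance trade-off the abstract reports.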
ERIC Educational Resources Information Center
Mann, Heather M.; Rutstein, Daisy W.; Hancock, Gregory R.
2009-01-01
Multisample measured variable path analysis is used to test whether causal/structural relations among measured variables differ across populations. Several invariance testing approaches are available for assessing cross-group equality of such relations, but the associated test statistics may vary considerably across methods. This study is a…
Geiser, Christian; Keller, Brian T.; Lockhart, Ginger; Eid, Michael; Cole, David A.; Koch, Tobias
2014-01-01
Researchers analyzing longitudinal data often want to find out whether the process they study is characterized by (1) short-term state variability, (2) long-term trait change, or (3) a combination of state variability and trait change. Classical latent state-trait (LST) models are designed to measure reversible state variability around a fixed set-point or trait, whereas latent growth curve (LGC) models focus on long-lasting and often irreversible trait changes. In the present paper, we contrast LST and LGC models from the perspective of measurement invariance (MI) testing. We show that establishing a pure state-variability process requires (a) the inclusion of a mean structure and (b) establishing strong factorial invariance in LST analyses. Analytical derivations and simulations demonstrate that LST models with non-invariant parameters can mask the fact that a trait-change or hybrid process has generated the data. Furthermore, the inappropriate application of LST models to trait change or hybrid data can lead to bias in the estimates of consistency and occasion-specificity, which are typically of key interest in LST analyses. Four tips for the proper application of LST models are provided. PMID:24652650
Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan
2014-02-10
Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors, and separately, methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but not the actual predictors of interest. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors) - outdoor fine particulate matter and cigarette smoke - and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE 4 , outcome) in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and we used concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of toxicity of the different pollutants. We used simulations to verify accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
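Regression calibration in its simplest form can be sketched in two stages: predict the surrogate exposure from the instrument, then regress the outcome on that prediction. The setup below is a deliberately simplified, simulated analogue of the pollutant study (one exposure, no interaction term, no mixed model):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
beta = 1.5                              # true exposure effect

x = rng.normal(0, 1, n)                 # true personal exposure (unobserved)
w = x + rng.normal(0, 1, n)             # unbiased surrogate (personal monitor, with error)
z = 0.8 * x + rng.normal(0, 1, n)       # instrument (e.g., fixed outdoor monitor)
y = beta * x + rng.normal(0, 1, n)      # outcome (e.g., biomarker level)

# Stage 1: calibrate, i.e. predict the surrogate from the instrument.
g = np.cov(z, w)[0, 1] / np.var(z)
w_hat = w.mean() + g * (z - z.mean())

# Stage 2: regress the outcome on the calibrated exposure.
beta_rc = np.cov(w_hat, y)[0, 1] / np.var(w_hat)

# For comparison: naive regression on the noisy surrogate is attenuated.
beta_naive = np.cov(w, y)[0, 1] / np.var(w)
print(beta_naive, beta_rc)
```

Because the instrument's error is independent of both the surrogate's error and the outcome, the two-stage estimate is consistent for the true coefficient, while the naive slope is biased toward zero.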
Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2012-01-01
The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
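Quantile regression replaces the squared-error loss with the asymmetric "pinball" loss. A minimal sketch on simulated heteroscedastic data, where the upper quantiles respond to the predictor more strongly than the lower ones (as in the ecological settings the primer describes):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 10, n)
# Heteroscedastic data: spread of y grows with x, so quantile slopes differ.
y = 1.0 + 0.5 * x + rng.exponential(scale=0.2 * x + 0.1)

def pinball(params, tau):
    # Asymmetric loss whose minimizer is the tau-th conditional quantile line.
    a, b = params
    r = y - (a + b * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

q10 = minimize(pinball, x0=[0.0, 0.5], args=(0.10,), method="Nelder-Mead").x
q90 = minimize(pinball, x0=[0.0, 0.5], args=(0.90,), method="Nelder-Mead").x

print(q10[1], q90[1])  # slope is steeper at the 90th percentile
```

The least-squares fit would report only the mean slope; the two quantile fits reveal that the upper tail of the response depends on x much more strongly than the lower tail.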
Testing of technology readiness index model based on exploratory factor analysis approach
NASA Astrophysics Data System (ADS)
Ariani, AF; Napitupulu, D.; Jati, RK; Kadar, JA; Syafrullah, M.
2018-04-01
SMEs' readiness in using ICT will determine their adoption of ICT in the future. This study aims to evaluate the technology readiness model in order to apply the technology to SMEs. The model is tested to find out whether the TRI model is relevant for measuring ICT adoption, especially for SMEs in Indonesia. The research method used in this paper is a survey of a group of SMEs in South Tangerang. The survey measures the readiness to adopt ICT based on four variables, which are Optimism, Innovativeness, Discomfort, and Insecurity. Each variable contains several indicators to make sure the variable is measured thoroughly. The data collected through the survey were analysed using the factor analysis method with the help of SPSS software. The result of this study shows that the TRI model exhibits discrepancies in some indicators and variables. This result may be caused by the fact that SME owners' knowledge is not homogeneous, either about the technology they use or about the type of their business.
NASA Astrophysics Data System (ADS)
Kusrini, Elisa; Subagyo; Aini Masruroh, Nur
2016-01-01
This research is a sequel to the authors' earlier research in the field of designing integrated performance measurement between supply chain actors and the regulator. In the previous paper, the design of the performance measurement was done by combining the Balanced Scorecard - Supply Chain Operation Reference - Regulator Contribution model and Data Envelopment Analysis. This model is referred to as the B-S-Rc-DEA model. The combination has the disadvantage that all performance variables have the same weight. This paper investigates whether giving weight to the performance variables produces a performance measurement that is more sensitive in detecting performance improvement. Therefore, this paper discusses the development of the B-S-Rc-DEA model by giving weight to its performance variables. This model is referred to as the Scale B-S-Rc-DEA model. To illustrate the development of the model, samples from small and medium enterprises of the leather craft industry supply chain in the province of Yogyakarta, Indonesia are used in this research. It is found that the Scale B-S-Rc-DEA model is more sensitive in detecting performance improvement than the B-S-Rc-DEA model.
NASA Astrophysics Data System (ADS)
Collins, Curtis Andrew
Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model use is flexible, as not all independent variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by the coefficient of determination (R2), surpassed 0.5 in numerous models, with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.
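The weighting step described above can be sketched as row-scaling before an ordinary least-squares solve. The variance model below (residual variance proportional to pre-storm volume) is an illustrative assumption, not necessarily the exact form used in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3000

pre = rng.uniform(10, 1000, n)                            # pre-storm volume per plot
# Simulated damage whose residual variance grows with pre-storm volume.
damage = 0.3 * pre + rng.normal(0, 1, n) * np.sqrt(pre)

X = np.column_stack([np.ones(n), pre])

# WLS via row-scaling: multiply each row by its weight and solve OLS.
w = 1.0 / np.sqrt(pre)
coef_wls, *_ = np.linalg.lstsq(X * w[:, None], damage * w, rcond=None)

# Unweighted OLS for comparison: still unbiased, but less efficient here.
coef_ols, *_ = np.linalg.lstsq(X, damage, rcond=None)
print(coef_ols[1], coef_wls[1])  # both near 0.3
```

Down-weighting high-variance plots stabilizes the residual variance, which is the "residual variance homogeneity" improvement the abstract mentions.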
Schwartz, Jennifer; Wang, Yongfei; Qin, Li; Schwamm, Lee H; Fonarow, Gregg C; Cormier, Nicole; Dorsey, Karen; McNamara, Robert L; Suter, Lisa G; Krumholz, Harlan M; Bernheim, Susannah M
2017-11-01
The Centers for Medicare & Medicaid Services publicly reports a hospital-level stroke mortality measure that lacks stroke severity risk adjustment. Our objective was to describe novel measures of stroke mortality suitable for public reporting that incorporate stroke severity into risk adjustment. We linked data from the American Heart Association/American Stroke Association Get With The Guidelines-Stroke registry with Medicare fee-for-service claims data to develop the measures. We used logistic regression for variable selection in risk model development. We developed 3 risk-standardized mortality models for patients with acute ischemic stroke, all of which include the National Institutes of Health Stroke Scale score: one that includes other risk variables derived only from claims data (claims model); one that includes other risk variables derived from claims and clinical variables that could be obtained from electronic health record data (hybrid model); and one that includes other risk variables that could be derived only from electronic health record data (electronic health record model). The cohort used to develop and validate the risk models consisted of 188 975 hospital admissions at 1511 hospitals. The claims, hybrid, and electronic health record risk models included 20, 21, and 9 risk-adjustment variables, respectively; the C statistics were 0.81, 0.82, and 0.79, respectively (as compared with the current publicly reported model C statistic of 0.75); the risk-standardized mortality rates ranged from 10.7% to 19.0%, 10.7% to 19.1%, and 10.8% to 20.3%, respectively; the median risk-standardized mortality rate was 14.5% for all measures; and the odds of mortality for a high-mortality hospital (+1 SD) were 1.51, 1.52, and 1.52 times those for a low-mortality hospital (-1 SD), respectively. 
We developed 3 quality measures that demonstrate better discrimination than the Centers for Medicare & Medicaid Services' existing stroke mortality measure, adjust for stroke severity, and could be implemented in a variety of settings. © 2017 American Heart Association, Inc.
Follow-Up Care for Older Women With Breast Cancer
1998-08-01
and node status (positive/negative); and breast cancer treatments received. For the breast cancer treatment variables, we used two different … interview. Independent Variables. We constructed five different measures of comorbidity. The first was a self-reported measure of cardiopulmonary… Candidate variables for our multivariate models included: baseline measures of the relevant outcome, age, stage, comorbidity, primary tumor therapy
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
Assessment of mid-latitude atmospheric variability in CMIP5 models using a process oriented-metric
NASA Astrophysics Data System (ADS)
Di Biagio, Valeria; Calmanti, Sandro; Dell'Aquila, Alessandro; Ruti, Paolo
2013-04-01
We compare, for the period 1962-2000, an estimate of the northern hemisphere mid-latitude winter atmospheric variability according to several global climate models included in the fifth phase of the Climate Model Intercomparison Project (CMIP5) with the results of models belonging to the previous CMIP3 and with the NCEP-NCAR reanalysis. We use the space-time Hayashi spectra of the 500 hPa geopotential height fields to characterize the variability of atmospheric circulation regimes, and we introduce an ad hoc integral measure of the variability observed in the Northern Hemisphere on different spectral sub-domains. The overall performance of each model is evaluated by considering the total wave variability as a global scalar measure of the statistical properties of different types of atmospheric disturbances. The variability associated with eastward propagating baroclinic waves and with planetary waves is instead used to describe the performance of each model in terms of specific physical processes. We find that the two model ensembles (CMIP3 and CMIP5) do not show substantial differences in the description of northern hemisphere winter mid-latitude atmospheric variability, although some CMIP5 models display performances superior to their previous versions implemented in CMIP3. Preliminary results for the 21st century RCP 4.5 scenario will also be discussed for the CMIP5 models.
A comparative analysis of errors in long-term econometric forecasts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tepel, R.
1986-04-01
The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.
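The error measure used in the paper (the percent difference between actual and forecasted values) is straightforward to compute; the figures below are invented for illustration only:

```python
def percent_error(actual, forecast):
    # Error measured as the percent difference between actual and forecast.
    return 100.0 * (actual - forecast) / actual

# Hypothetical actual values and ex ante forecasts for three periods.
actual = [104.3, 98.7, 110.2]
ex_ante = [101.0, 103.5, 108.8]

errors = [percent_error(a, f) for a, f in zip(actual, ex_ante)]
mape = sum(abs(e) for e in errors) / len(errors)   # mean absolute percent error
print([round(e, 2) for e in errors], round(mape, 2))  # → [3.16, -4.86, 1.27] 3.1
```

Signed errors reveal systematic over- or under-prediction, while the mean absolute percent error summarizes overall accuracy across periods, which is how patterns across services and forecast horizons can be compared.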
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
Laura P. Leites; Andrew P. Robinson; Nicholas L. Crookston
2009-01-01
Diameter growth (DG) equations in many existing forest growth and yield models use tree crown ratio (CR) as a predictor variable. Where CR is not measured, it is estimated from other measured variables. We evaluated CR estimation accuracy for the models in two Forest Vegetation Simulator variants: the exponential and the logistic CR models used in the North...
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
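Forward stepwise selection of the kind used to build the fall-severity model can be sketched as repeatedly adding the candidate variable that most improves R², stopping when the gain becomes negligible. Everything below (data, coefficients, stopping threshold) is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 300, 11                        # 11 candidate input variables, as in the study
X = rng.normal(0, 1, (n, p))
# Hypothetical ground truth: only 4 of the 11 variables drive the impact force.
force = X[:, [0, 3, 5, 7]] @ np.array([3.0, 2.0, 1.5, 1.0]) + rng.normal(0, 1, n)

def r2(cols):
    # R-squared of an OLS fit on the given columns plus an intercept.
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    fit, *_ = np.linalg.lstsq(A, force, rcond=None)
    resid = force - A @ fit
    return 1 - resid.var() / force.var()

selected, current = [], 0.0
while True:
    gains = {c: r2(selected + [c]) for c in range(p) if c not in selected}
    best, best_r2 = max(gains.items(), key=lambda kv: kv[1])
    if best_r2 - current < 0.01:      # stop when improvement is negligible
        break
    selected.append(best)
    current = best_r2

print(sorted(selected), round(current, 2))
```

The procedure recovers the four informative variables and ignores the noise columns, mirroring how the study's final model retained four of the eleven candidate inputs.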
Evapotranspiration measurement and modeling without fitting parameters in high-altitude grasslands
NASA Astrophysics Data System (ADS)
Ferraris, Stefano; Previati, Maurizio; Canone, Davide; Dematteis, Niccolò; Boetti, Marco; Balocco, Jacopo; Bechis, Stefano
2016-04-01
Mountain grasslands are important: one sixth of the world's population lives in watersheds dominated by snowmelt, and grasslands provide food for both domestic and wild animals. Global warming will probably accelerate the hydrological cycle and increase the drought risk. The combination of measurements, modeling and remote sensing can provide knowledge of such remote areas (e.g., Brocca et al., 2013). A better knowledge of the water balance can also allow irrigation to be optimized (e.g., Canone et al., 2015). This work builds a model of the water balance in mountain grasslands between 1500 and 2300 m a.s.l. The main input is the digital terrain model, which is more reliable in grasslands than in either woods or the built environment; it drives the spatial variability of shortwave solar radiation. The other atmospheric forcings, namely air temperature, wind and longwave radiation, are more problematic to estimate. Ad hoc routines were written to interpolate the hourly meteorological variability in space. The soil hydraulic properties are less variable than in the plains, but estimating soil depth is still an open issue. The soil vertical variability has been modeled taking into account the main processes: soil evaporation, root uptake, and fractured bedrock percolation. The modeled time variability of latent heat flux and soil moisture was compared with data measured at an eddy-covariance station. The results are very good, given that the model has no fitting parameters. The spatial variability results were compared with the results of a model based on Landsat 7 and 8 data, applied over an area of about 200 square kilometers; the spatial patterns of the two models are in good agreement. Brocca et al. (2013). "Soil moisture estimation in alpine catchments through modelling and satellite observations". Vadose Zone Journal, 12(3), 10 pp.
Canone et al. (2015). "Field measurements based model for surface irrigation efficiency assessment". Agric. Water Manag., 156(1), 30-42.
Estimations of natural variability between satellite measurements of trace species concentrations
NASA Astrophysics Data System (ADS)
Sheese, P.; Walker, K. A.; Boone, C. D.; Degenstein, D. A.; Kolonjari, F.; Plummer, D. A.; von Clarmann, T.
2017-12-01
In order to validate satellite measurements of atmospheric states, it is necessary to understand the range of random and systematic errors inherent in the measurements. On occasions where the measurements do not agree within those errors, a common "go-to" explanation is that the unexplained difference can be chalked up to "natural variability". However, the expected natural variability is often left ambiguous and rarely quantified. This study seeks to quantify the expected natural variability of both O3 and NO2 between two satellite instruments: ACE-FTS (Atmospheric Chemistry Experiment - Fourier Transform Spectrometer) and OSIRIS (Optical Spectrograph and Infrared Imaging System). By sampling the CMAM30 (30-year specified dynamics simulation of the Canadian Middle Atmosphere Model) climate chemistry model throughout the upper troposphere and stratosphere at the times and geolocations of coincident ACE-FTS and OSIRIS measurements under varying coincidence criteria, height-dependent expected values of O3 and NO2 variability are estimated and reported. The results could also be used to better optimize the coincidence criteria used in satellite measurement validation studies.
Radinger, Johannes; Wolter, Christian; Kail, Jochem
2015-01-01
Habitat suitability and the distinct mobility of species depict fundamental keys for explaining and understanding the distribution of river fishes. In recent years, comprehensive data on river hydromorphology has been mapped at spatial scales down to 100 m, potentially serving high resolution species-habitat models, e.g., for fish. However, the relative importance of specific hydromorphological and in-stream habitat variables and their spatial scales of influence is poorly understood. Applying boosted regression trees, we developed species-habitat models for 13 fish species in a sand-bed lowland river based on river morphological and in-stream habitat data. First, we calculated mean values for the predictor variables in five distance classes (from the sampling site up to 4000 m up- and downstream) to identify the spatial scale that best predicts the presence of fish species. Second, we compared the suitability of measured variables and assessment scores related to natural reference conditions. Third, we identified variables which best explained the presence of fish species. The mean model quality (AUC = 0.78, area under the receiver operating characteristic curve) significantly increased when information on the habitat conditions up- and downstream of a sampling site (maximum AUC at 2500 m distance class, +0.049) and topological variables (e.g., stream order) were included (AUC = +0.014). Both measured and assessed variables were similarly well suited to predict species’ presence. Stream order variables and measured cross section features (e.g., width, depth, velocity) were best-suited predictors. In addition, measured channel-bed characteristics (e.g., substrate types) and assessed longitudinal channel features (e.g., naturalness of river planform) were also good predictors. 
These findings demonstrate (i) the applicability of high resolution river morphological and instream-habitat data (measured and assessed variables) to predict fish presence, (ii) the importance of considering habitat at spatial scales larger than the sampling site, and (iii) that the importance of (river morphological) habitat characteristics differs depending on the spatial scale. PMID:26569119
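The AUC reported above has a convenient rank interpretation: it is the probability that a randomly chosen presence site receives a higher model score than a randomly chosen absence site. A sketch with invented scores for one hypothetical species:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical model scores at sites where the species is present vs. absent.
present = rng.normal(0.7, 0.2, 200)    # scores at occupied sites
absent = rng.normal(0.4, 0.2, 300)     # scores at unoccupied sites

# AUC as the fraction of (presence, absence) pairs ranked correctly.
auc = np.mean(present[:, None] > absent[None, :])
print(round(auc, 3))
```

An AUC of 0.78, as in the fish-habitat models, therefore means the model ranks an occupied site above an unoccupied one about 78% of the time, regardless of any particular classification threshold.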
Validity test and its consistency in the construction of patient loyalty model
NASA Astrophysics Data System (ADS)
Yanuar, Ferra
2016-04-01
The main objective of the present study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The estimation method was then applied to empirical data in the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, and each factor was measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at a Puskesmas (community health center) in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality was measured best by the tangibility indicator, patient satisfaction by satisfaction with service, and patient loyalty by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected positively and directly by patient satisfaction. Service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also proved that the validity values obtained here were consistent, based on a simulation study using a bootstrap approach.
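The bootstrap consistency check mentioned at the end can be sketched as resampling respondents and re-estimating a standardized loading each time. The data below are simulated stand-ins, not the Puskesmas survey data:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 394                                # sample size reported in the study

# Hypothetical latent-variable proxy and one indicator loading on it.
latent = rng.normal(0, 1, n)
indicator = 0.7 * latent + rng.normal(0, 0.7, n)

def loading(idx):
    # Standardized loading estimated as a correlation on one bootstrap resample.
    return np.corrcoef(latent[idx], indicator[idx])[0, 1]

boot = np.array([loading(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(boot.mean(), 2), round(lo, 2), round(hi, 2))
```

A narrow bootstrap interval that stays well away from zero is the kind of evidence the study uses to argue that its validity values are consistent.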
Classification images reveal decision variables and strategies in forced choice tasks
Pritchett, Lisa M.; Murray, Richard F.
2015-01-01
Despite decades of research, there is still uncertainty about how people make simple decisions about perceptual stimuli. Most theories assume that perceptual decisions are based on decision variables, which are internal variables that encode task-relevant information. However, decision variables are usually considered to be theoretical constructs that cannot be measured directly, and this often makes it difficult to test theories of perceptual decision making. Here we show how to measure decision variables on individual trials, and we use these measurements to test theories of perceptual decision making more directly than has previously been possible. We measure classification images, which are estimates of templates that observers use to extract information from stimuli. We then calculate the dot product of these classification images with the stimuli to estimate observers' decision variables. Finally, we reconstruct each observer's “decision space,” a map that shows the probability of the observer’s responses for all values of the decision variables. We use this method to examine decision strategies in two-alternative forced choice (2AFC) tasks, for which there are several competing models. In one experiment, the resulting decision spaces support the difference model, a classic theory of 2AFC decisions. In a second experiment, we find unexpected decision spaces that are not predicted by standard models of 2AFC decisions, and that suggest intrinsic uncertainty or soft thresholding. These experiments give new evidence regarding observers’ strategies in 2AFC tasks, and they show how measuring decision variables can answer long-standing questions about perceptual decision making. PMID:26015584
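The decision-variable estimation method this abstract describes (the dot product of a classification image with each stimulus) can be sketched on a simulated linear-template observer. The template, stimulus construction, and trial counts below are illustrative assumptions, not the paper's experimental stimuli.

```python
import numpy as np

rng = np.random.default_rng(1)
npix, ntrials = 64, 5000

template = np.zeros(npix)
template[20:40] = 1.0                      # hypothetical observer template
template /= np.linalg.norm(template)

noise = rng.normal(size=(ntrials, npix))   # per-trial stimulus noise fields
stimuli = 0.2 + noise                      # weak uniform signal plus noise

# Simulated linear observer: the internal decision variable is the dot
# product of the template with the stimulus; respond "A" above criterion.
dv = stimuli @ template
responses = dv > np.median(dv)

# Classification image: difference of mean noise fields between the two
# response classes -- proportional to the observer's template.
cimg = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
cimg /= np.linalg.norm(cimg)

# Per-trial decision-variable estimates, as in the paper's method.
dv_hat = stimuli @ cimg
r = np.corrcoef(dv, dv_hat)[0, 1]
print(f"correlation between true and recovered decision variables: {r:.2f}")
```

With enough trials the classification image converges toward the template, so the recovered per-trial decision variables track the true ones closely.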
Nespolo, Roberto F; Arim, Matías; Bozinovic, Francisco
2003-07-01
Body size is one of the most important determinants of energy metabolism in mammals. However, the usual physiological variables measured to characterize energy metabolism and heat dissipation in endotherms are strongly affected by thermal acclimation, and are also correlated among themselves. In addition to choosing the appropriate measurement of body size, these problems create additional complications when analyzing the relationships among physiological variables such as basal metabolism, non-shivering thermogenesis, thermoregulatory maximum metabolic rate and minimum thermal conductance, body size dependence, and the effect of thermal acclimation on them. We measured these variables in Phyllotis darwini, a murid rodent from central Chile, under conditions of warm and cold acclimation. In addition to standard statistical analyses to determine the effect of thermal acclimation on each variable and the body-mass-controlled correlation among them, we performed a Structural Equation Modeling analysis to evaluate the effects of three different measurements of body size (body mass, m(b); body length, L(b) and foot length, L(f)) on energy metabolism and thermal conductance. We found that thermal acclimation changed the correlation among physiological variables. Only cold-acclimated animals supported our a priori path models, and m(b) appeared to be the best descriptor of body size (compared with L(b) and L(f)) when dealing with energy metabolism and thermal conductance. However, while m(b) appeared to be the strongest determinant of energy metabolism, there was an important and significant contribution of L(b) (but not L(f)) to thermal conductance. This study demonstrates how additional information can be drawn from physiological ecology and general organismal studies by applying Structural Equation Modeling when multiple variables are measured in the same individuals.
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
A Multivariate Model of Parent-Adolescent Relationship Variables in Early Adolescence
ERIC Educational Resources Information Center
McKinney, Cliff; Renk, Kimberly
2011-01-01
Given the importance of predicting outcomes for early adolescents, this study examines a multivariate model of parent-adolescent relationship variables, including parenting, family environment, and conflict. Participants, who completed measures assessing these variables, included 710 culturally diverse 11-14-year-olds who were attending a middle…
Structural Equations and Path Analysis for Discrete Data.
ERIC Educational Resources Information Center
Winship, Christopher; Mare, Robert D.
1983-01-01
Presented is an approach to causal models in which some or all variables are discretely measured, showing that path analytic methods permit quantification of causal relationships among variables with the same flexibility and power of interpretation as is feasible in models including only continuous variables. Examples are provided. (Author/IS)
NASA Astrophysics Data System (ADS)
Srivastava, P. K.; Han, D.; Rico-Ramirez, M. A.; Bray, M.; Islam, T.; Petropoulos, G.; Gupta, M.
2015-12-01
Hydro-meteorological variables such as precipitation and reference evapotranspiration (ETo) are the most important variables for discharge prediction. However, it is not always possible to obtain them from ground-based measurements, particularly in ungauged catchments. The mesoscale WRF (Weather Research and Forecasting) model can be used to predict hydro-meteorological variables, but hydro-meteorologists need to know how well the downscaled global data products compare with ground-based measurements and whether the downscaled data can be used for ungauged catchments. Even in gauged catchments, most stations have only rain and flow gauges installed; measurements of other hydro-meteorological variables such as solar radiation, wind speed, air temperature, and dew point are usually missing, which complicates the problem. In this study, to downscale the global datasets, the WRF model is set up over the Brue catchment with three nested domains (D1, D2, and D3) at horizontal grid spacings of 81 km, 27 km, and 9 km. The hydro-meteorological variables are downscaled with the WRF model from the National Centers for Environmental Prediction (NCEP) reanalysis datasets and subsequently used for ETo estimation with the Penman-Monteith equation. The downscaled weather variables and precipitation are compared against ground-based datasets, which indicates that they agree with the observations over the complete monitoring period as well as across seasons, except for precipitation, whose performance is poorer relative to the measured rainfall. After this comparison, the WRF-estimated precipitation and ETo are used as inputs to the Probability Distributed Model (PDM) for discharge prediction.
The input data and model parameter sensitivity analysis and uncertainty estimation are also taken into account for the PDM calibration and prediction following the Generalised Likelihood Uncertainty Estimation (GLUE) approach. The overall analysis suggests that the uncertainty estimates in predicted discharge using WRF downscaled ETo have comparable performance to ground based observed datasets and hence is promising for discharge prediction in the absence of ground based measurements.
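The GLUE procedure mentioned in this abstract can be sketched in a few lines: sample parameters from a prior range, retain "behavioural" parameter sets whose informal likelihood exceeds a threshold, and form likelihood-weighted prediction bounds. The toy one-parameter rainfall-runoff model below is purely illustrative; it is not the PDM, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-parameter model: discharge = k * effective rainfall.
rain = rng.gamma(2.0, 2.0, size=100)
obs = 0.7 * rain + rng.normal(scale=0.5, size=100)   # synthetic "observed" discharge

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal GLUE likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Sample the parameter from its prior range and keep behavioural sets.
k_samples = rng.uniform(0.0, 2.0, size=5000)
scores = np.array([nse(k * rain, obs) for k in k_samples])
keep = scores > 0.5
behavioural, weights = k_samples[keep], scores[keep]
weights = weights / weights.sum()

# Likelihood-weighted 90% prediction bounds for a new rainfall of 10 units.
preds = behavioural * 10.0
order = np.argsort(preds)
cdf = np.cumsum(weights[order])
lo = preds[order][np.searchsorted(cdf, 0.05)]
hi = preds[order][np.searchsorted(cdf, 0.95)]
print(f"90% GLUE uncertainty band: [{lo:.2f}, {hi:.2f}]")
```

The resulting band quantifies predictive uncertainty in the same spirit as the discharge uncertainty estimates reported in the study.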
Modeling and forecasting US presidential election using learning algorithms
NASA Astrophysics Data System (ADS)
Zolghadr, Mohammad; Niaki, Seyed Armin Akhavan; Niaki, S. T. A.
2017-09-01
The primary objective of this research is to obtain an accurate forecasting model for the US presidential election. To identify a reliable model, artificial neural network (ANN) and support vector regression (SVR) models are compared on a set of specified performance measures. Six independent variables, including GDP, the unemployment rate, and the president's approval rate, are considered in a stepwise regression to identify significant variables. The president's approval rate is identified as the most significant variable, based on which eight other variables are identified and considered in model development. Preprocessing methods are applied to prepare the data for the learning algorithms. The proposed procedure increases the accuracy of the model by 50%, and the learning algorithms (ANN and SVR) prove superior to linear regression on each method's calculated performance measures. The SVR model is identified as the most accurate, having successfully predicted the outcome of the last three elections (2004, 2008, and 2012).
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
Waist circumference, body mass index, and employment outcomes.
Kinge, Jonas Minet
2017-07-01
Body mass index (BMI) is an imperfect measure of body fat. Recent studies provide evidence in favor of replacing BMI with waist circumference (WC). Hence, I investigated whether or not the association between fat mass and employment status varies across anthropometric measures. I used 15 rounds of the Health Survey for England (1998-2013), which includes measures of employment status in addition to measured height, weight, and WC. WC and BMI were entered as continuous variables, and obesity as binary variables defined using both WC and BMI. I used multivariate models controlling for a set of covariates. The association of WC with employment was of greater magnitude than the association between BMI and employment. I then reran the analysis using conventional instrumental variables (IV) methods. The IV models showed significant impacts of obesity on employment; however, these were not more pronounced when WC was used to measure obesity compared with BMI. This means that, in the IV models, the impact of fat mass on employment did not depend on the measure of fat mass.
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using a clustering algorithm appropriate to the measurement scale of the variables in the study, high performance is attained. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
Cross, Paul C.; Klaver, Robert W.; Brennan, Angela; Creel, Scott; Beckmann, Jon P.; Higgs, Megan D.; Scurlock, Brandon M.
2013-01-01
Abstract. It is increasingly common for studies of animal ecology to use model-based predictions of environmental variables as explanatory or predictor variables, even though model prediction uncertainty is typically unknown. To demonstrate the potential for misleading inferences when model predictions with error are used in place of direct measurements, we compared snow water equivalent (SWE) and snow depth as predicted by the Snow Data Assimilation System (SNODAS) to field measurements of SWE and snow depth. We examined locations on elk (Cervus canadensis) winter ranges in western Wyoming, because modeled data such as SNODAS output are often used for inferences on elk ecology. Overall, SNODAS predictions tended to overestimate field measurements, prediction uncertainty was high, and the difference between SNODAS predictions and field measurements was greater in snow shadows for both snow variables compared to non-snow shadow areas. We used a simple simulation of snow effects on the probability of an elk being killed by a predator to show that, if SNODAS prediction uncertainty was ignored, we might have mistakenly concluded that SWE was not an important factor in where elk were killed in predatory attacks during the winter. In this simulation, we were interested in the effects of snow at finer scales (2) than the resolution of SNODAS. If bias were to decrease when SNODAS predictions are averaged over coarser scales, SNODAS would be applicable to population-level ecology studies. In our study, however, averaging predictions over moderate to broad spatial scales (9–2200 km2) did not reduce the differences between SNODAS predictions and field measurements. 
This study highlights the need to carefully evaluate two issues when using model output as an explanatory variable in subsequent analysis: (1) the model’s resolution relative to the scale of the ecological question of interest and (2) the implications of prediction uncertainty on inferences when using model predictions as explanatory or predictor variables.
Yu, Ping; Qian, Siyu
2018-01-01
Electronic health records (EHR) are introduced into healthcare organizations worldwide to improve patient safety, healthcare quality and efficiency. A rigorous evaluation of this technology is important to reduce potential negative effects on patients and staff, to provide decision makers with accurate information for system improvement, and to ensure return on investment. Therefore, this study develops a theoretical model and questionnaire survey instrument to assess the success of organizational EHR in routine use from the viewpoint of nursing staff in residential aged care homes. The proposed research model incorporates the six variables in the reformulated DeLone and McLean information systems success model: system quality, information quality, service quality, use, user satisfaction and net benefits. Two additional variables, training and self-efficacy, were also incorporated into the model. A questionnaire survey instrument was designed to measure the eight variables in the model. After a pilot test, the measurement scale was used to collect data from 243 nursing staff members in 10 residential aged care homes belonging to three management groups in Australia. Partial least squares path modeling was conducted to validate the model. The validated EHR systems success model predicts the impact of the four antecedent variables (training, self-efficacy, system quality and information quality) on the net benefits, the indicator of EHR systems success, through the mediating variables use and user satisfaction. A 24-item measurement scale was developed to quantitatively evaluate the performance of an EHR system. The parsimonious EHR systems success model and the measurement scale can be used to benchmark EHR systems success across organizations and units and over time. PMID:29315323
Thermoviscoplastic model with application to copper
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1988-01-01
A viscoplastic model is developed which is applicable to anisothermal, cyclic, and multiaxial loading conditions. Three internal state variables are used in the model; one to account for kinematic effects, and the other two to account for isotropic effects. One of the isotropic variables is a measure of yield strength, while the other is a measure of limit strength. Each internal state variable evolves through a process of competition between strain hardening and recovery. There is no explicit coupling between dynamic and thermal recovery in any evolutionary equation, which is a useful simplification in the development of the model. The thermodynamic condition of intrinsic dissipation constrains the thermal recovery function of the model. Application of the model is made to copper, and cyclic experiments under isothermal, thermomechanical, and nonproportional loading conditions are considered. Correlations and predictions of the model are representative of observed material behavior.
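A generic internal-state-variable evolution law of the hardening-minus-recovery form described in this abstract can be sketched numerically. The constants and functional form below are illustrative assumptions, not the paper's copper parameters.

```python
# Forward-Euler integration of a generic internal-state-variable evolution
# law of the hardening-minus-recovery form  ds/dt = h*|ep_dot| - r*s**m.
h, r, m = 100.0, 0.2, 3.0     # hardening modulus, thermal recovery coefficient, exponent
ep_rate = 1.0                 # constant inelastic strain rate
dt, nsteps = 1e-3, 10_000

s = 0.0                       # internal state (e.g., an isotropic strength variable)
for _ in range(nsteps):
    s += (h * abs(ep_rate) - r * s ** m) * dt   # hardening competes with recovery

# At saturation, hardening balances recovery: s_sat = (h*|ep_dot|/r)**(1/m).
print(f"state after integration: {s:.2f} (analytic saturation: {(h / r) ** (1 / m):.2f})")
```

The competition between strain hardening and recovery drives the state toward a saturation value, which is the qualitative behavior the model's evolutionary equations encode.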
ERIC Educational Resources Information Center
Crow, Wendell C.
This paper suggests ways in which manifest, physical attributes of graphic elements can be described and measured. It also proposes a preliminary conceptual model that accounts for the readily apparent, measurable variables in a visual message. The graphic elements that are described include format, typeface, and photographs/artwork. The…
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.; Tong, Bing
2016-01-01
A latent variable modeling procedure is discussed that can be used to test if two or more homogeneous multicomponent instruments with distinct components are measuring the same underlying construct. The method is widely applicable in scale construction and development research and can also be of special interest in construct validation studies.…
The Accuracy and Reproducibility of Linear Measurements Made on CBCT-derived Digital Models.
Maroua, Ahmad L; Ajaj, Mowaffak; Hajeer, Mohammad Y
2016-04-01
To evaluate the accuracy and reproducibility of linear measurements made on cone-beam computed tomography (CBCT)-derived digital models. A total of 25 patients (44% female, 18.7 ± 4 years) who had CBCT images for diagnostic purposes were included. Plaster models were obtained and digital models were extracted from the CBCT scans. Seven linear measurements from predetermined landmarks were measured and analyzed on the plaster models and the corresponding digital models. The measurements included arch length and width at different sites. Paired t tests and Bland-Altman analysis were used to evaluate the accuracy of measurements on the digital models compared with the plaster models, and intraclass correlation coefficients (ICCs) were used to evaluate the reproducibility of the measurements and assess intraobserver reliability. The statistical analysis showed significant differences in 5 out of 14 variables, with mean differences ranging from -0.48 to 0.51 mm. The Bland-Altman analysis revealed mean differences of 0.14 ± 0.56 mm and 0.05 ± 0.96 mm, with limits of agreement between the two methods ranging from -1.2 to 0.96 mm and from -1.8 to 1.9 mm in the maxilla and the mandible, respectively. Intraobserver reliability values were determined for all 14 variables on the two types of models separately. The mean ICC value was 0.984 (0.924-0.999) for the plaster models and 0.946 (0.850-0.985) for the CBCT models. Linear measurements obtained from the CBCT-derived models appeared to have a high level of accuracy and reproducibility.
Causal inference with measurement error in outcomes: Bias analysis and estimation methods.
Shu, Di; Yi, Grace Y
2017-01-01
Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore the inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in a closed-form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
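The inverse probability weighting estimator discussed in this abstract, and the benign effect of independent additive outcome error on it, can be sketched as follows. The data are synthetic with a known treatment effect, and for simplicity the true propensity score is used rather than an estimated one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Confounded treatment assignment with a known true effect of 2.0.
x = rng.normal(size=n)                       # confounder
p = 1.0 / (1.0 + np.exp(-0.5 * x))           # true propensity score
a = rng.binomial(1, p)                       # treatment indicator
y = 2.0 * a + x + rng.normal(size=n)         # true outcome
y_star = y + rng.normal(size=n)              # additive, independent measurement error

def ipw_ate(a, y, p):
    """Inverse-probability-weighted estimate of the average treatment effect."""
    return np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))

est_true = ipw_ate(a, y, p)
est_err = ipw_ate(a, y_star, p)
print(f"IPW ATE, error-free outcome:  {est_true:.2f}")
print(f"IPW ATE, mismeasured outcome: {est_err:.2f}")
```

Both estimates land near the true effect, illustrating the paper's observation that, for a continuous outcome under an additive independent error model, the naive analysis can remain consistent; the binary-outcome bias the paper derives is not reproduced here.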
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays in epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids the ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods under various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma using high-performance liquid chromatography.
Confirmatory Factor Analysis of Ordinal Variables with Misspecified Models
ERIC Educational Resources Information Center
Yang-Wallentin, Fan; Joreskog, Karl G.; Luo, Hao
2010-01-01
Ordinal variables are common in many empirical investigations in the social and behavioral sciences. Researchers often apply the maximum likelihood method to fit structural equation models to ordinal data. This assumes that the observed measures have normal distributions, which is not the case when the variables are ordinal. A better approach is…
Measuring Five Dimensions of Religiosity across Adolescence
Pearce, Lisa D.; Hayward, George M.; Pearlman, Jessica A.
2017-01-01
This paper theorizes and tests a latent variable model of adolescent religiosity in which five dimensions of religiosity are interrelated: religious beliefs, religious exclusivity, external religiosity, private practice, and religious salience. Research often theorizes overlapping and independent influences of single items or dimensions of religiosity on outcomes such as adolescent sexual behavior, but rarely operationalizes the dimensions in a measurement model accounting for their associations with each other and across time. We use longitudinal structural equation modeling (SEM) with latent variables to analyze data from two waves of the National Study of Youth and Religion. We test our hypothesized measurement model as compared to four alternate measurement models and find that our proposed model maintains superior fit. We then discuss the associations between the five dimensions of religiosity we measure and how these change over time. Our findings suggest how future research might better operationalize multiple dimensions of religiosity in studies of the influence of religion in adolescence. PMID:28931956
Monitoring D-Region Variability from Lightning Measurements
NASA Technical Reports Server (NTRS)
Simoes, Fernando; Berthelier, Jean-Jacques; Pfaff, Robert; Bilitza, Dieter; Klenzing, Jeffery
2011-01-01
In situ measurements of ionospheric D-region characteristics are somewhat scarce and rely mostly on sounding rockets. Remote sensing techniques employing Very Low Frequency (VLF) transmitters can provide electron density estimates from subionospheric wave propagation modeling. Here we discuss how lightning waveform measurements, namely sferics and tweeks, can be used for monitoring the D-region variability and day-night transition, and for local electron density estimates. A brief comparison among D-region aeronomy models is also presented.
Bayesian model for matching the radiometric measurements of aerospace and field ocean color sensors.
Salama, Mhd Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R² > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.
Development of a working Hovercraft model
NASA Astrophysics Data System (ADS)
Noor, S. H. Mohamed; Syam, K.; Jaafar, A. A.; Mohamad Sharif, M. F.; Ghazali, M. R.; Ibrahim, W. I.; Atan, M. F.
2016-02-01
This paper presents the development process of a working hovercraft model. The purpose of this study is to design and investigate a fully functional hovercraft, based on the studies that had been done. Several hovercraft model designs were made and tested, but only one of them is presented in this paper. The weight, thrust, lift, and drag force of the model were measured, and the electrical and mechanical parts are also presented. The processing unit of the model is an Arduino Uno, with the PSP2 (PlayStation 2) as the controller. Since the prototype should function on all kinds of terrain, the model was also tested under different floor conditions, including water, grass, cement, and tile. In each case, the speed of the model was measured as the response variable, with current (I) as the manipulated variable and voltage (V) as the constant variable.
ERIC Educational Resources Information Center
McDonald, Roderick P.
2011-01-01
A distinction is proposed between measures and predictors of latent variables. The discussion addresses the consequences of the distinction for the true-score model, the linear factor model, Structural Equation Models, longitudinal and multilevel models, and item-response models. A distribution-free treatment of calibration and…
A measurement model of multiple intelligence profiles of management graduates
NASA Astrophysics Data System (ADS)
Krishnan, Heamalatha; Awang, Siti Rahmah
2017-05-01
In this study, the main aims are to develop a well-fitting measurement model and to identify the best-fitting items representing Howard Gardner's nine intelligences, namely musical, bodily-kinaesthetic, mathematical/logical, visual/spatial, verbal/linguistic, interpersonal, intrapersonal, naturalist, and spiritual intelligence, in order to enhance the employability of management graduates. Structural Equation Modeling (SEM) was applied to develop the measurement model. A psychometric test, the Ability Test in Employment (ATIEm), was used as the instrument to measure the nine types of intelligence in 137 University Teknikal Malaysia Melaka (UTeM) management graduates for job-placement purposes. The initial measurement model contained nine unobserved variables, each measured by ten observed variables. The modified measurement model showed improved fit: Normed chi-square (NC) = 1.331, Incremental Fit Index (IFI) = 0.940, and Root Mean Square Error of Approximation (RMSEA) = 0.049. The findings showed that the UTeM management graduates possessed all nine intelligences, whether to a high or low degree. Musical, mathematical/logical, naturalist, and spiritual intelligence contributed the highest loadings on certain items; however, most of the other intelligences, namely bodily-kinaesthetic, visual/spatial, verbal/linguistic, interpersonal, and intrapersonal intelligence, were just at the borderline.
Generalized Processing Tree Models: Jointly Modeling Discrete and Continuous Variables.
Heck, Daniel W; Erdfelder, Edgar; Kieslich, Pascal J
2018-05-24
Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.
Quantile regression models of animal habitat relationships
Cade, Brian S.
2003-01-01
Typically, not all factors that limit an organism are measured and included in the statistical models used to investigate relationships with its environment. If important unmeasured variables interact multiplicatively with the measured variables, the statistical models often will have heterogeneous response distributions with unequal variances. Quantile regression is an approach for estimating the conditional quantiles of a response variable distribution in the linear model, providing a more complete view of possible causal relationships between variables in ecological processes. Chapter 1 introduces quantile regression and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of estimates for homogeneous and heterogeneous regression models. Chapter 2 evaluates the performance of quantile rankscore tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). A permutation F test maintained better Type I errors than the Chi-square T test for models with smaller n, a greater number of parameters p, and more extreme quantiles τ. Both versions of the test required weighting to maintain correct Type I errors when there was heterogeneity under the alternative model. An example application related trout densities to stream channel width:depth. Chapter 3 evaluates a drop-in-dispersion, F-ratio-like permutation test for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1). Chapter 4 simulates from a large (N = 10,000) finite population representing grid areas on a landscape to demonstrate various forms of hidden bias that might occur when the effect of a measured habitat variable on some animal is confounded with the effect of another unmeasured variable (spatially structured or not).
Depending on whether interactions of the measured habitat and unmeasured variable were negative (interference interactions) or positive (facilitation interactions), either upper (τ > 0.5) or lower (τ < 0.5) quantile regression parameters were less biased than mean rate parameters. Sampling (n = 20 - 300) simulations demonstrated that confidence intervals constructed by inverting rankscore tests provided valid coverage of these biased parameters. Quantile regression was used to estimate effects of physical habitat resources on a bivalve mussel (Macomona liliana) in a New Zealand harbor by modeling the spatial trend surface as a cubic polynomial of location coordinates.
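The mechanics of linear quantile regression described in this abstract can be illustrated by minimizing the check-function (pinball) loss directly. A minimal sketch, with a made-up data-generating process that mimics the multiplicative-interaction heterogeneity discussed above (all names and values are illustrative, not the study's data):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(0, 5, n)
# Multiplicative error: the spread grows with x, so regression
# quantiles fan out around the mean trend (heterogeneous response).
y = 1.0 + 2.0 * x + (1.0 + x) * rng.uniform(-1, 1, n)

def pinball(beta, tau):
    """Mean check-function (pinball) loss for a linear quantile model."""
    r = y - (beta[0] + beta[1] * x)
    return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))

def fit_quantile(tau):
    res = minimize(pinball, x0=[0.0, 1.0], args=(tau,), method="Nelder-Mead")
    return res.x

b_lo = fit_quantile(0.10)   # lower conditional quantile (tau = 0.10)
b_hi = fit_quantile(0.90)   # upper conditional quantile (tau = 0.90)
# Under heterogeneity the tau = 0.90 slope exceeds the tau = 0.10 slope
```

Dedicated estimators (e.g. the simplex-based routines behind most quantile regression software) are preferable in practice; the direct minimization above is only meant to show what the estimate optimizes.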
NASA Astrophysics Data System (ADS)
Acero, Juan A.; Arrizabalaga, Jon
2018-01-01
Urban areas are known to modify meteorological variables, producing important differences at small spatial scales (i.e. the microscale). These differences affect human thermal comfort conditions and the dispersion of pollutants, especially those emitted inside the urban area, and thus influence quality of life and the use of public open spaces. In this study, the diurnal evolution of meteorological variables measured in four urban spaces is compared with the results provided by ENVI-met (v 4.0). Measurements were carried out during 3 days with different meteorological conditions in Bilbao in the north of the Iberian Peninsula. The evaluation of the model accuracy (i.e. the degree to which modelled values approach measured values) was carried out with several quantitative difference metrics. The results for air temperature and humidity show good agreement between measured and modelled values independently of the regional meteorological conditions. However, in the case of mean radiant temperature and wind speed, relevant differences are encountered, highlighting the limitation of the model in estimating these meteorological variables precisely during diurnal cycles under the considered evaluation conditions (sites and weather).
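The "quantitative difference metrics" used for this kind of model evaluation are standard; a minimal numpy sketch of mean bias error, MAE, RMSE and Willmott's index of agreement follows (the measured/modelled values are made up for illustration, not data from the study):

```python
import numpy as np

# Hypothetical measured vs. modelled hourly air temperature (deg C)
obs = np.array([14.2, 15.0, 16.8, 18.4, 19.9, 21.1, 20.5, 18.7])
mod = np.array([13.8, 15.4, 17.5, 18.1, 20.6, 21.9, 19.8, 18.0])

mbe = np.mean(mod - obs)                    # mean bias error (signed)
mae = np.mean(np.abs(mod - obs))            # mean absolute error
rmse = np.sqrt(np.mean((mod - obs) ** 2))   # root mean square error

# Willmott's index of agreement d in [0, 1]; 1 means a perfect match
denom = np.sum((np.abs(mod - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
d = 1.0 - np.sum((mod - obs) ** 2) / denom
```

Reporting a signed metric (MBE) alongside magnitude metrics (MAE, RMSE) separates systematic bias from scatter, which is why evaluation studies typically quote several of these together.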
NASA Astrophysics Data System (ADS)
Baptista, Isaurinda; Irvine, Brian; Fleskens, Luuk; Geissen, Violette; Ritsema, Coen
2015-04-01
Rainfall variability, the occurrence of extreme drought and historic land management practice have been recognised as contributing to serious environmental impact in Cabo Verde. Investment in conservation measures has become visible throughout the landscape. Despite this, the biophysical and socioeconomic impacts of the conservation measures have been poorly assessed and documented. As such, a concerted approach based on the DESIRE project continues to consult stakeholders and carry out field trials for selected conservation technologies. Recent field trials have demonstrated the potential of conservation technologies but have also demonstrated that yield variability between sites and between years is significant. This variability appears to be driven by soil and rainfall characteristics. However, where detailed field studies have run for only a limited period, they have not yet encountered the full range of climatic variability; thus a modelling approach is considered to capture a greater range of climatic conditions. The PESERA-DESMICE model is adopted, which considers the biophysical and socioeconomic benefits of the conservation technologies against a local baseline condition. PESERA is adopted as climate is implicitly considered in the model and, where appropriate, in-situ conservation measures are considered as an annual input to the soil. The DESMICE component of the model considers the suitability of the conservation measures and their costs and benefits in terms of environmental conditions and market access. Historic rainfall statistics are calculated from field measurements in the Ribeira Seca catchment. These statistics are used to generate a series of 50-year rainfall realisations to capture a fuller range of the climatic conditions. Each realisation provides a unique time series of rainfall and, through modelling, a simulated time series of crop yield.
Additional realisations and model simulations add to an envelope of the potential crop yield and cost-benefit relations. The development of such envelopes helps express the agricultural risk associated with climate variability and the potential of the conservation measures to absorb that risk, highlighting the uncertainty of a given crop yield being achieved in any particular year. Such information can directly inform or influence the adoption of conservation measures under the climatic variability of the Cabo Verde drylands.
The origin of Total Solar Irradiance variability on timescales less than a day
NASA Astrophysics Data System (ADS)
Shapiro, Alexander; Krivova, Natalie; Schmutz, Werner; Solanki, Sami K.; Leng Yeo, Kok; Cameron, Robert; Beeck, Benjamin
2016-07-01
Total Solar Irradiance (TSI) varies on timescales from minutes to decades. It is generally accepted that variability on timescales of a day and longer is dominated by solar surface magnetic fields. For shorter timescales, several additional sources of variability have been proposed, including convection and oscillations. However, available simplified and highly parameterised models could not accurately explain the observed variability in high-cadence TSI records. We employed the high-cadence solar imagery from the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory and the SATIRE (Spectral And Total Irradiance Reconstruction) model of solar irradiance variability to recreate the magnetic component of TSI variability. Recent 3D simulations of solar near-surface convection with the MURaM code have been used to calculate the TSI variability caused by convection. This allowed us to determine the threshold timescale between TSI variability caused by the magnetic field and by convection. Our model successfully replicates the TSI measurements by the PICARD/PREMOS radiometer, which span the period July 2010 to February 2014 at 2-minute cadence. Hence, we demonstrate that solar magnetism and convection can account for TSI variability on all timescales on which it has ever been measured (apart from the 5-minute component due to p-modes).
Short- and long-term variability of radon progeny concentration in dwellings in the Czech Republic.
Slezáková, M; Navrátilová Rovenská, K; Tomásek, L; Holecek, J
2013-03-01
In this paper, repeated measurements of radon progeny concentration in dwellings in the Czech Republic are described. Two distinct data sets are available: one based on present measurements in 170 selected dwellings in the Central Bohemian Pluton with a primary measurement carried out in the 1990s and the other based on 1920 annual measurements in 960 single-family houses in the Czech Republic in 1992 and repeatedly in 1993. The analysis of variance model with random effects is applied to data to evaluate the variability of measurements. The calculated variability attributable to repeated measurements is compared with results from other countries. In epidemiological studies, ignoring the variability of measurements may lead to biased estimates of risk of lung cancer.
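The random-effects analysis of variance used in this kind of repeated-measurement study decomposes total variability into a between-dwelling component and a within-dwelling (repeat-measurement) component. A minimal method-of-moments sketch with simulated data (the variances and sample sizes are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dwellings, n_repeats = 200, 2
sigma_b, sigma_w = 0.6, 0.3   # assumed between-/within-dwelling SDs (log scale)

dwelling_mean = 5.0 + sigma_b * rng.standard_normal(n_dwellings)
data = dwelling_mean[:, None] + sigma_w * rng.standard_normal((n_dwellings, n_repeats))

# One-way random-effects ANOVA, method-of-moments estimators
group_means = data.mean(axis=1)
msw = np.sum((data - group_means[:, None]) ** 2) / (n_dwellings * (n_repeats - 1))
msb = n_repeats * np.sum((group_means - data.mean()) ** 2) / (n_dwellings - 1)

var_within = msw                        # variability of repeated measurements
var_between = (msb - msw) / n_repeats   # "true" between-dwelling variability
```

The within component is exactly the quantity whose neglect the abstract warns about: treating a single noisy measurement as the true long-term concentration attenuates estimated exposure-response relationships in epidemiological studies.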
NASA Astrophysics Data System (ADS)
Irmak, Suat; Mutiibwa, Denis; Payero, Jose; Marek, Thomas; Porter, Dana
2013-12-01
Canopy resistance (rc) is one of the most important variables in evapotranspiration, agronomy, hydrology and climate change studies that link vegetation response to changing environmental and climatic variables. This study investigates a generalized nonlinear/linear modeling approach for estimating rc from micrometeorological and plant variables for a soybean [Glycine max (L.) Merr.] canopy in different climatic zones in Nebraska, USA (Clay Center, Geneva, Holdrege and North Platte). Eight models estimating rc as a function of different combinations of micrometeorological and plant variables are presented. The models integrated the linear and non-linear effects of regulating variables (net radiation, Rn; relative humidity, RH; wind speed, U3; air temperature, Ta; vapor pressure deficit, VPD; leaf area index, LAI; aerodynamic resistance, ra; and solar zenith angle, Za) to predict hourly rc. The most complex rc model includes all regulating variables and the simplest model has only Rn, Ta and RH. The rc models were developed at Clay Center in the growing season of 2007 and applied to other independent sites and years. The predicted rc for the growing seasons at the four locations were then used to estimate actual crop evapotranspiration (ETc) in a one-step process using the Penman-Monteith model and compared to the measured data at all locations. The models were able to account for 66-93% of the variability in measured hourly ETc across locations. Models without LAI generally underperformed and underestimated ETc owing to overestimation of rc, especially during the full canopy cover stage. Using vapor pressure deficit or relative humidity in the models had a similar effect on estimating rc. The root squared error (RSE) between measured and estimated ETc was about 0.07 mm h-1 for most of the models at Clay Center, Geneva and Holdrege. At North Platte, RSE was above 0.10 mm h-1.
The results at different sites and in different growing seasons demonstrate the robustness and consistency of the models in estimating soybean rc, which is encouraging for the general application of one-step estimation of soybean canopy ETc in practice using the Penman-Monteith model and could enhance the utilization of the approach by the irrigation and water management community.
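The one-step estimation referred to above substitutes a modelled rc directly into the Penman-Monteith combination equation. A minimal sketch with textbook constants and made-up forcing values (not the paper's calibration) illustrates why an overestimated rc, as in the models without LAI, depresses the estimated flux:

```python
import numpy as np

def penman_monteith_le(rc, ra, rn, g, vpd, ta):
    """Latent heat flux LE (W m-2) from the Penman-Monteith combination equation.

    rc, ra : canopy and aerodynamic resistance (s m-1)
    rn, g  : net radiation and soil heat flux (W m-2)
    vpd    : vapour pressure deficit (kPa)
    ta     : air temperature (deg C)
    Constants below are typical textbook values, assumed for illustration.
    """
    gamma = 0.066                 # psychrometric constant (kPa K-1)
    rho_cp = 1.2 * 1013.0         # air density * specific heat (J m-3 K-1)
    # Slope of the saturation vapour pressure curve (kPa K-1)
    es = 0.6108 * np.exp(17.27 * ta / (ta + 237.3))
    delta = 4098.0 * es / (ta + 237.3) ** 2
    num = delta * (rn - g) + rho_cp * vpd / ra
    return num / (delta + gamma * (1.0 + rc / ra))

le_open = penman_monteith_le(rc=50.0, ra=30.0, rn=500.0, g=50.0, vpd=1.5, ta=25.0)
le_closed = penman_monteith_le(rc=200.0, ra=30.0, rn=500.0, g=50.0, vpd=1.5, ta=25.0)
# Quadrupling rc roughly halves LE here: overestimating rc underestimates ETc
```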
Spatial Variability of Snowpack Properties On Small Slopes
NASA Astrophysics Data System (ADS)
Pielmeier, C.; Kronholm, K.; Schneebeli, M.; Schweizer, J.
The spatial variability of alpine snowpacks is created by a variety of processes such as deposition, wind erosion, sublimation, melting, temperature, radiation and metamorphism of the snow. Spatial variability is thought to strongly control the avalanche initiation and failure propagation processes. Local snowpack measurements are currently the basis for avalanche warning services, and there exist contradicting hypotheses about the spatial continuity of avalanche-active snow layers and interfaces. Very little is known so far about the spatial variability of the snowpack; we have therefore developed a systematic and objective method to measure the spatial variability of snowpack properties and layering and its relation to stability. For complete coverage, the analysis of the spatial variability has to entail all scales from mm to km. In this study the small- to medium-scale spatial variability is investigated, i.e. the range from centimeters to tens of meters. During the winter 2000/2001 we took systematic measurements in lines and grids on a flat snow test field with grid distances from 5 cm to 0.5 m. Furthermore, we measured systematic grids with grid distances between 0.5 m and 2 m in undisturbed flat fields and on small slopes above the tree line at the Choerbschhorn, in the region of Davos, Switzerland. On 13 days we measured the spatial pattern of the snowpack stratigraphy with more than 110 snow micro-penetrometer measurements at slopes and flat fields. Within this measuring grid we placed 1 rutschblock and 12 stuffblock tests to measure the stability of the snowpack. With the large number of measurements we are able to use geostatistical methods to analyse the spatial variability of the snowpack. Typical correlation lengths are calculated from semivariograms. Systematic trends are discerned from random spatial variability using statistical models. Scale dependencies are shown and recurring scaling patterns are outlined.
The importance of the small and medium scale spatial variability for the larger (kilometer) scale spatial variability as well as for the avalanche formation are discussed. Finally, an outlook on spatial models for the snowpack variability is given.
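The geostatistical workhorse mentioned above, the empirical semivariogram, is simple to compute for a 1-D transect of penetrometer-like measurements. A minimal sketch with synthetic spatially correlated data (the moving-average construction and its correlation length are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic 1-D transect: spatially correlated "hardness" values built
# with a moving average, so nearby points resemble each other.
white = rng.standard_normal(600)
kernel = np.ones(20) / 20.0
z = np.convolve(white, kernel, mode="valid")   # correlation length ~ kernel width

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) = 0.5 * mean squared increment at lag h."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
    return lags, gamma

lags, gamma = semivariogram(z, max_lag=60)
# gamma rises from near zero at small lags and levels off (the "sill")
# around the correlation length, which is read off as the "range"
```

Fitting a variogram model (spherical, exponential, ...) to such points yields the typical correlation lengths the abstract refers to.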
Exploring the use of WRF-3DVar for estimating reference evapotranspiration in semi-arid regions
NASA Astrophysics Data System (ADS)
Bray, Michaela; Liu, Jia; Abdulhamza, Ali; Bocklemann-Evans, Bettina
2013-04-01
Evapotranspiration is an important process in hydrology and is central to the analysis of water balances and water resource management. Significant water losses can occur in large drainage basins under semi-arid climate conditions; moreover, with the lack of measured data, the exact losses are hard to quantify. Since direct measurements of evapotranspiration are difficult to obtain, it is common to estimate the process using evapotranspiration models such as the Priestley-Taylor model, the Shuttleworth-Wallace model and the FAO Penman-Monteith. However, these models depend on several atmospheric variables such as atmospheric pressure, wind speed, air temperature, net radiation and relative humidity. Some of these variables are also difficult to acquire from in-situ measurements; in addition, these measurements provide local information which needs to be interpolated to cover larger catchment areas over long time scales. Mesoscale Numerical Weather Prediction (NWP) modelling has become more accessible to the hydrometeorological community in recent years and is frequently used for modelling precipitation at the catchment scale. However, these NWPs can also provide the atmospheric variables needed for evapotranspiration estimation at finer resolutions than can be attained from in-situ measurements, offering a practical water resource tool. Moreover, there is evidence that assimilation of real-time observations can help improve the accuracy of mesoscale weather modelling, which in turn would improve the overall evapotranspiration estimate. This study explores the effect of data assimilation in the Weather Research and Forecasting (WRF) model to derive evapotranspiration estimates for the Tigris water basin, Iraq. Two types of traditional observations, SYNOP and SOUND, which contain surface and upper-level measurements of pressure, temperature, humidity and wind, are assimilated by WRF-3DVAR.
The downscaled weather variables are used to determine evapotranspiration estimates, which are compared with observed evapotranspiration data measured by a Class A evaporation pan.
USDA-ARS's Scientific Manuscript database
Although empirical models have been developed previously, a mechanistic model is needed for estimating electrical conductivity (EC) using time domain reflectometry (TDR) with variable lengths of coaxial cable. The goals of this study are to: (1) derive a mechanistic model based on multisection tra...
Dynamic Modeling of the Main Blow in Basic Oxygen Steelmaking Using Measured Step Responses
NASA Astrophysics Data System (ADS)
Kattenbelt, Carolien; Roffel, B.
2008-10-01
In the control and optimization of basic oxygen steelmaking, it is important to have an understanding of the influence of control variables on the process. However, important process variables such as the composition of the steel and slag cannot be measured continuously. The decarburization rate and the accumulation rate of oxygen, which can be derived from the generally measured waste gas flow and composition, are an indication of changes in steel and slag composition. The influence of the control variables on the decarburization rate and the accumulation rate of oxygen can best be determined in the main blow period. In this article, the measured step responses of the decarburization rate and the accumulation rate of oxygen to step changes in the oxygen blowing rate, lance height, and the addition rate of iron ore during the main blow are presented. These measured step responses are subsequently used to develop a dynamic model for the main blow. The model consists of an iron oxide and a carbon balance and an additional equation describing the influence of the lance height and the oxygen blowing rate on the decarburization rate. With this simple dynamic model, the measured step responses can be explained satisfactorily.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia
2014-06-10
We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
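The simplest member of this family, a CAR(1) process (the damped random walk often fitted to AGN light curves), already shows the Lorentzian structure: its PSD is a single Lorentzian centred at zero frequency, and a general CARMA(p, q) PSD is a sum of such terms. A minimal sketch verifying that the PSD integrates to the process variance (parameter values are illustrative):

```python
import numpy as np

def car1_psd(freq, sigma2, tau):
    """Two-sided PSD of a CAR(1) process: a zero-centred Lorentzian.

    sigma2 : process variance; tau : relaxation (damping) time.
    """
    return 2.0 * sigma2 * tau / (1.0 + (2.0 * np.pi * tau * freq) ** 2)

sigma2, tau = 1.5, 10.0
freq = np.linspace(-50.0, 50.0, 400_001)
psd = car1_psd(freq, sigma2, tau)

# Integrating the PSD over all frequencies recovers the process variance
df = freq[1] - freq[0]
total_power = np.sum(psd) * df   # ~ sigma2 (tails beyond +-50 are negligible)
```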
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
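The Monte Carlo propagation idea is easy to sketch: perturb the inputs and the response measurement according to their assumed uncertainties, push each draw through the model, and read the output spread off the simulated ensemble. The response surface and uncertainty values below are hypothetical stand-ins, not the nasal-spray DOE model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x1, x2):
    """Hypothetical DOE response surface (illustrative, not the paper's model)."""
    return 10.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x1 * x2

# Nominal settings and assumed input/measurement uncertainties (SDs)
x1_0, x2_0 = 1.0, 2.0
sd_x1, sd_x2, sd_meas = 0.1, 0.1, 0.2

n = 100_000
x1 = rng.normal(x1_0, sd_x1, n)
x2 = rng.normal(x2_0, sd_x2, n)
y = response(x1, x2) + rng.normal(0.0, sd_meas, n)  # response-measurement noise

# First-order (delta-method) prediction, for comparison with the MC spread
g1 = 3.0 + 0.5 * x2_0    # dy/dx1 at the nominal point
g2 = -2.0 + 0.5 * x1_0   # dy/dx2 at the nominal point
sd_linear = np.sqrt((g1 * sd_x1) ** 2 + (g2 * sd_x2) ** 2 + sd_meas ** 2)
sd_mc = y.std()
```

For a nearly linear surface the two estimates agree closely; the Monte Carlo route remains valid when the model is too nonlinear for the delta method, which is its appeal for complex DOE models.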
Mediation in dyadic data at the level of the dyads: a Structural Equation Modeling approach.
Ledermann, Thomas; Macho, Siegfried
2009-10-01
An extended version of the Common Fate Model (CFM) is presented to estimate and test mediation in dyadic data. The model can be used for distinguishable dyad members (e.g., heterosexual couples) or indistinguishable dyad members (e.g., homosexual couples) if (a) the variables measure characteristics of the dyadic relationship or shared external influences that affect both partners; (b) the causal associations between the variables should be analyzed at the dyadic level; and (c) the measured variables are reliable indicators of the latent variables. To assess mediation using Structural Equation Modeling, a general three-step procedure is suggested: first, selection of a well-fitting model; second, a test of the direct effects; and third, a test of the mediating effect by means of bootstrapping. The application of the model, along with the procedure for assessing mediation, is illustrated using data from 184 couples on marital problems, communication, and marital quality. Differences from the Actor-Partner Interdependence Model and the analysis of longitudinal mediation using the CFM are discussed.
Tomperi, Jani; Leiviskä, Kauko
2018-06-01
Traditionally, modelling of the activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has grown, the results of image analyses have in recent years been utilized more frequently to predict the characteristics of wastewater. This study shows that neither the traditional process measurements nor the automated optical monitoring variables by themselves are capable of producing the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which show the level of and changes in treated wastewater quality, are achieved by utilizing these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important in modelling a certain quality parameter, regardless of the other input variables available.
Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray
2014-01-01
The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index under 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, like biomass concentration, can be estimated on-line from easy-to-measure variables like CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
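The NARX idea, predicting a hard-to-measure state from its own lagged values plus lagged exogenous inputs, reduces in the linear case to an ARX regression that can be fitted by least squares. A minimal sketch with a toy first-order system and the SMAPE metric (the dynamics and noise level are assumptions for illustration, not the SSC process):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy first-order ARX system standing in for biomass-from-gas measurements:
# y_t = a*y_{t-1} + b*u_{t-1} + noise, with u the "easy to measure" input.
a_true, b_true = 0.9, 0.5
T = 400
u = rng.uniform(0, 1, T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + 0.05 * rng.standard_normal()

# NARX-style one-step-ahead regression (linear here, fitted by least squares;
# the paper's ANN/SVM variants replace this linear map with a learned one)
X = np.column_stack([y[:-1], u[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
y_pred = X @ coef

# Symmetric mean absolute percentage error of the one-step predictions
smape = 100.0 * np.mean(2 * np.abs(y_pred - y[1:]) / (np.abs(y_pred) + np.abs(y[1:])))
```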
Ronald E. McRoberts
2005-01-01
Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Types of information additional to locally observed discharge can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
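The constraint-based rejection described here follows a limits-of-acceptability pattern: sample candidate parameter sets, simulate, and retain only those whose summary statistics fall inside climate-informed bounds. A deliberately tiny sketch of that pattern (the one-parameter "model" and the runoff-ratio bounds are made up for illustration, not the study's model or constraints):

```python
import numpy as np

rng = np.random.default_rng(11)

def runoff_ratio(ea_frac):
    """Toy water balance: the fraction of precipitation leaving as streamflow
    when a fraction ea_frac is lost to actual evaporation."""
    return 1.0 - ea_frac

# Monte Carlo parameter sample: the prior pool of candidate parameter sets
ea_frac = rng.uniform(0.0, 1.0, 10_000)

# Climate-knowledge constraint: assumed acceptable long-term runoff-ratio
# bounds (illustrative numbers standing in for aridity-based reasoning)
lo, hi = 0.45, 0.75
keep = (runoff_ratio(ea_frac) >= lo) & (runoff_ratio(ea_frac) <= hi)
behavioural = ea_frac[keep]

frac_retained = behavioural.size / ea_frac.size  # posterior pool shrinks
```

In the real method the simulated quantity is a full hydrograph evaluated against multiple constraints and signatures, but the retain-or-reject logic per parameter set is the same.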
Helle, Samuli
2018-03-01
Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approaches. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
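The underestimation described here is classical attenuation: regressing on a noisy proxy shrinks the slope by the proxy's reliability, and combining multiple indicators (the essence of the latent-variable approach) raises reliability and recovers more of the true effect. A minimal simulation sketch (effect size, noise level and indicator count are illustrative assumptions, not the Finnish data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
effort = rng.standard_normal(n)                       # latent reproductive effort
survival = -0.5 * effort + rng.standard_normal(n)     # true effect size: -0.5

# Three noisy indicators of the latent variable (e.g. parity, LRS, ...)
noise_sd = 1.0
indicators = effort[:, None] + noise_sd * rng.standard_normal((n, 3))

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

b_single = slope(indicators[:, 0], survival)          # one proxy: attenuated
b_mean3 = slope(indicators.mean(axis=1), survival)    # pooled indicators: less so

# Expected attenuation factor = reliability = var(true) / var(observed)
rel_single = 1.0 / (1.0 + noise_sd ** 2)        # 0.5  -> slope ~ -0.25
rel_mean3 = 1.0 / (1.0 + noise_sd ** 2 / 3)     # 0.75 -> slope ~ -0.375
```

A full SEM goes further by estimating the loadings and error variances jointly rather than simply averaging, but the direction of the bias and of its correction is the same.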
Estimating the Effects of Pre-College Education on College Performance
2013-05-10
background information from many variables into a single measure of the expected likelihood of a person receiving treatment. This leads into a discussion of...but do not directly affect outcome variables like academic order of merit, graduation rates, or academic grades. Our model had to not only include the...both indicator variables for whether the individual’s parents ever served in any of the armed forces. High School Quality Measure is a variable
Benson, Nicholas F; Kranzler, John H; Floyd, Randy G
2016-10-01
Prior research examining relations between cognitive ability and academic achievement has been based on different theoretical models, has employed both latent and observed variables, and has used a variety of analytic methods. Not surprisingly, results have been inconsistent across studies. The aims of this study were to (a) examine how relations between psychometric g, Cattell-Horn-Carroll (CHC) broad abilities, and academic achievement differ across higher-order and bifactor models; (b) examine how well various types of observed scores correspond with latent variables; and (c) compare two types of observed scores (i.e., refined and non-refined factor scores) as predictors of academic achievement. Results suggest that cognitive-achievement relations vary across theoretical models and that both types of factor scores tend to correspond well with the models on which they are based. However, orthogonal refined factor scores (derived from a bifactor model) have the advantage of controlling for multicollinearity arising from the measurement of psychometric g across all measures of cognitive abilities. Results indicate that the refined factor scores provide more precise representations of their targeted constructs than non-refined factor scores and maintain close correspondence with the cognitive-achievement relations observed for latent variables. Thus, we argue that orthogonal refined factor scores provide more accurate representations of the relations between CHC broad abilities and achievement outcomes than non-refined scores do. Further, the use of refined factor scores addresses calls for the application of scores based on latent variable models. Copyright © 2016 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Fukumori, I.; Raghunath, R.; Fu, L. L.
1996-01-01
The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, over periods from days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.
A Linear Variable-θ Model for Measuring Individual Differences in Response Precision
ERIC Educational Resources Information Center
Ferrando, Pere J.
2011-01-01
Models for measuring individual response precision have been proposed for binary and graded responses. However, more continuous formats are quite common in personality measurement and are usually analyzed with the linear factor analysis model. This study extends the general Gaussian person-fluctuation model to the continuous-response case and…
Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M
1992-12-01
The purpose of this analysis is to compare three different statistical models for predicting which children are likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of the disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure (none versus any 3-yr increment) was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic model than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.
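The four screening metrics the comparison turns on are all ratios of confusion-matrix counts. A minimal sketch with a made-up 10-child example (not the study's data) makes the definitions concrete:

```python
import numpy as np

# Hypothetical 3-yr outcomes: 1 = developed a caries increment, 0 = did not
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
# A model's risk classification for the same children
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

tp = np.sum((pred == 1) & (truth == 1))   # correctly flagged at-risk children
fp = np.sum((pred == 1) & (truth == 0))   # false alarms
fn = np.sum((pred == 0) & (truth == 1))   # missed at-risk children
tn = np.sum((pred == 0) & (truth == 0))   # correctly cleared children

sensitivity = tp / (tp + fn)   # share of truly at-risk children flagged
specificity = tn / (tn + fp)   # share of low-risk children cleared
ppv = tp / (tp + fp)           # share of flagged children truly at risk
npv = tn / (tn + fn)           # share of cleared children truly low risk
```

The trade-off reported in the abstract is visible in these formulas: loosening the flagging criterion moves children from fn to tp (raising sensitivity and NPV) at the cost of moving others from tn to fp (lowering specificity and PPV).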
Bolandzadeh, Niousha; Kording, Konrad; Salowitz, Nicole; Davis, Jennifer C; Hsu, Liang; Chan, Alison; Sharma, Devika; Blohm, Gunnar; Liu-Ambrose, Teresa
2015-01-01
Current research suggests that the neuropathology of dementia-including brain changes leading to memory impairment and cognitive decline-is evident years before the onset of this disease. Older adults with cognitive decline have reduced functional independence and quality of life, and are at greater risk for developing dementia. Therefore, identifying biomarkers that can be easily assessed within the clinical setting and predict cognitive decline is important. Early recognition of cognitive decline could promote timely implementation of preventive strategies. We included 89 community-dwelling adults aged 70 years and older in our study, and collected 32 measures of physical function, health status and cognitive function at baseline. We utilized an L1-L2 regularized regression model (elastic net) to identify which of the 32 baseline measures were strongly predictive of cognitive function after one year. We built three linear regression models: 1) based on baseline cognitive function, 2) based on variables consistently selected in every cross-validation loop, and 3) a full model based on all the 32 variables. Each of these models was carefully tested with nested cross-validation. Our model with the six variables consistently selected in every cross-validation loop had a mean squared prediction error of 7.47. This number was smaller than that of the full model (115.33) and the model with baseline cognitive function (7.98). Our model explained 47% of the variance in cognitive function after one year. We built a parsimonious model based on a selected set of six physical function and health status measures strongly predictive of cognitive function after one year. In addition to reducing the complexity of the model without changing the model significantly, our model with the top variables improved the mean prediction error and R-squared. These six physical function and health status measures can be easily implemented in a clinical setting.
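The variable-selection-then-refit strategy described above can be sketched as follows. This is a minimal illustration, not the study's code: the data are synthetic, and the choice of six non-zero coefficients, the `l1_ratio` of 0.5, and the 5-fold cross-validation are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 89, 32                        # 89 participants, 32 baseline measures
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:6] = [1.5, -1.2, 0.8, 0.7, -0.5, 0.4]  # six truly predictive measures (assumed)
y = X @ beta + rng.normal(size=n)    # cognitive function after one year (synthetic)

# L1-L2 regularized (elastic net) fit; the penalty strength is chosen by internal CV
enet = ElasticNetCV(l1_ratio=0.5, cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(enet.coef_ != 0)

# Parsimonious linear model refit on the selected variables only
ols = LinearRegression().fit(X[:, selected], y)
r2 = ols.score(X[:, selected], y)
```

A nested cross-validation loop, as used in the study, would wrap both the `ElasticNetCV` selection and the final fit inside an outer loop so that prediction error is always estimated on data never used for selection.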
NASA Astrophysics Data System (ADS)
Zhu, Ying; Fearn, Tom; MacKenzie, Gary; Clark, Ben; Dunn, Jason M.; Bigio, Irving J.; Bown, Stephen G.; Lovat, Laurence B.
2009-07-01
Elastic scattering spectroscopy (ESS) may be used to detect high-grade dysplasia (HGD) or cancer in Barrett's esophagus (BE). When spectra are measured in vivo by a hand-held optical probe, variability among replicated spectra from the same site can hinder the development of a diagnostic model for cancer risk. An experiment was carried out on excised tissue to investigate how two potential sources of this variability, pressure and angle, influence spectral variability, and the results were compared with the variations observed in spectra collected in vivo from patients with Barrett's esophagus. A statistical method called error removal by orthogonal subtraction (EROS) was applied to model and remove this measurement variability, which accounted for 96.6% of the variation in the spectra, from the in vivo data. Its removal allowed the construction of a diagnostic model with specificity improved from 67% to 82% (with sensitivity fixed at 90%). The improvement was maintained in predictions on an independent in vivo data set. EROS works well as an effective pretreatment for Barrett's in vivo data by identifying measurement variability and ameliorating its effect. The procedure reduces the complexity and increases the accuracy and interpretability of the model for classification and detection of cancer risk in Barrett's esophagus.
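A simplified sketch of the EROS idea: replicate spectra from the same site should differ only by measurement variability, so the leading principal directions of the within-site differences span the nuisance subspace, which can then be projected out of all spectra. Everything here is synthetic and assumed (one nuisance direction, 30 sites, 3 replicates); the published method chooses the number of removed components from the data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_reps, n_wl = 30, 3, 50    # sites, replicate spectra per site, wavelengths

# Synthetic spectra: site signal + structured "pressure/angle" nuisance + noise
wl = np.linspace(0, 1, n_wl)
nuisance_dir = np.sin(2 * np.pi * wl)            # one dominant nuisance direction
site_signal = rng.normal(size=(n_sites, n_wl))
spectra = (np.repeat(site_signal, n_reps, axis=0)
           + rng.normal(scale=2.0, size=(n_sites * n_reps, 1)) * nuisance_dir
           + rng.normal(scale=0.05, size=(n_sites * n_reps, n_wl)))

# Step 1: within-site differences isolate the measurement variability
groups = spectra.reshape(n_sites, n_reps, n_wl)
diffs = (groups - groups.mean(axis=1, keepdims=True)).reshape(-1, n_wl)

# Step 2: leading right singular vectors of the differences span the nuisance subspace
_, s, Vt = np.linalg.svd(diffs, full_matrices=False)
k = 1                                 # number of nuisance components to remove
P = np.eye(n_wl) - Vt[:k].T @ Vt[:k]  # projector onto the orthogonal complement

cleaned = spectra @ P                 # spectra with the nuisance direction subtracted
```

Because the nuisance subspace is estimated from replicate differences alone, no diagnostic labels are needed for this pretreatment step.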
Bayesian Model for Matching the Radiometric Measurements of Aerospace and Field Ocean Color Sensors
Salama, Mhd. Suhyb; Su, Zhongbo
2010-01-01
A Bayesian model is developed to match aerospace ocean color observations to field measurements and derive the spatial variability of match-up sites. The performance of the model is tested against populations of synthesized spectra and full- and reduced-resolution MERIS data. The model derived the scale difference between a synthesized satellite pixel and point measurements with R2 > 0.88 and relative error < 21% in the spectral range from 400 nm to 695 nm. The sub-pixel variabilities of the reduced-resolution MERIS image are derived with less than 12% relative error in heterogeneous regions. The method is generic and applicable to different sensors.
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
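The instrumental variable logic can be illustrated with a deliberately simplified linear sketch (the dissertation's likelihood-based Monte Carlo EM machinery is far richer). Here the physical dose estimate carries classical error and a biological indicator serves as the instrument; all parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
true_dose = rng.normal(size=n)                    # true (unobservable) dose
w = true_dose + rng.normal(scale=0.8, size=n)     # physical dosimetry, classical error
z = true_dose + rng.normal(scale=0.8, size=n)     # biodosimeter, independent error
y = 2.0 * true_dose + rng.normal(size=n)          # dose response, true slope 2

# Naive regression of y on the error-prone dose is attenuated toward zero
naive = np.cov(w, y)[0, 1] / np.var(w)

# Instrumental variable estimator using the biological indicator as instrument
iv = np.cov(z, y)[0, 1] / np.cov(z, w)[0, 1]
```

The naive slope is biased downward by the classical error, while the ratio-of-covariances IV estimator recovers the true dose-response slope because the instrument's error is independent of the error in the physical dose estimate.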
Regression dilution bias: tools for correction methods and sample size calculation.
Berglund, Lars
2012-08-01
Random errors in the measurement of a risk factor will introduce downward bias into an estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity, measured with the euglycaemic insulin clamp technique, and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
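The slope correction described above can be sketched in a few lines, assuming (as the article describes) a reliability study that supplies a repeated measurement of the risk factor. All numbers are synthetic, and for simplicity the replicate is available for every individual rather than a subset.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
true_x = rng.normal(scale=1.0, size=n)            # true risk factor (e.g. insulin)
y = 0.5 * true_x + rng.normal(scale=0.5, size=n)  # outcome, true slope 0.5

# Main study: a single error-prone measurement of the risk factor
x1 = true_x + rng.normal(scale=0.7, size=n)
# Reliability study: a repeated measurement with independent error
x2 = true_x + rng.normal(scale=0.7, size=n)

# Replicates estimate the error variance: var(x1 - x2) = 2 * sigma_u^2
sigma_u2 = np.var(x1 - x2, ddof=1) / 2
reliability = 1 - sigma_u2 / np.var(x1, ddof=1)   # lambda = var(X) / var(X + U)

diluted = np.cov(x1, y, ddof=1)[0, 1] / np.var(x1, ddof=1)
corrected = diluted / reliability                 # undoes the attenuation
```

Since the two replicates carry independent errors, the reliability ratio can be estimated without any external validity data, and dividing the diluted slope by it recovers the undiluted association.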
A physiologically based toxicokinetic model for lake trout (Salvelinus namaycush).
Lien, G J; McKim, J M; Hoffman, A D; Jenson, C T
2001-01-01
A physiologically based toxicokinetic (PB-TK) model for fish, incorporating chemical exchange at the gill and accumulation in five tissue compartments, was parameterized and evaluated for lake trout (Salvelinus namaycush). Individual-based model parameterization was used to examine the effect of natural variability in physiological, morphological, and physico-chemical parameters on model predictions. The PB-TK model was used to predict uptake of organic chemicals across the gill and accumulation in blood and tissues in lake trout. To evaluate the accuracy of the model, a total of 13 adult lake trout were exposed to waterborne 1,1,2,2-tetrachloroethane (TCE), pentachloroethane (PCE), and hexachloroethane (HCE), concurrently, for periods of 6, 12, 24 or 48 h. The measured and predicted concentrations of TCE, PCE and HCE in expired water, dorsal aortic blood and tissues were generally within a factor of two, and in most instances much closer. Variability noted in model predictions, based on the individual-based model parameterization used in this study, reproduced variability observed in measured concentrations. The inference is made that parameters influencing variability in measured blood and tissue concentrations of xenobiotics are included and accurately represented in the model. This model contributes to a better understanding of the fundamental processes that regulate the uptake and disposition of xenobiotic chemicals in the lake trout. This information is crucial to developing a better understanding of the dynamic relationships between contaminant exposure and hazard to the lake trout.
Gruginskie, Lúcia Adriana Dos Santos; Vaccaro, Guilherme Luís Roehe
2018-01-01
The quality of a country's judicial system can be gauged by the overall duration of lawsuits, or lead time. When the lead time is excessive, a country's economy can be affected, leading to the adoption of measures such as the creation of the Saturn Center in Europe. Although there are performance indicators to measure the lead time of lawsuits, the analysis and fitting of prediction models are still underdeveloped themes in the literature. To contribute to this subject, this article compares different prediction models according to their accuracy, sensitivity, specificity, precision, and F1 measure. The database used was from TRF4 (Tribunal Regional Federal da 4a Região), a federal court in southern Brazil, and corresponds to the 2nd Instance civil lawsuits completed in 2016. The models were fitted using support vector machine, naive Bayes, random forest, and neural network approaches with categorical predictor variables. The lead time of the 2nd Instance judgment was selected as the response variable, measured in days and categorized in bands. The comparison among the models showed that the support vector machine and random forest approaches produced measurements superior to those of the other models. The models were evaluated using k-fold cross-validation.
Manifest Variable Granger Causality Models for Developmental Research: A Taxonomy
ERIC Educational Resources Information Center
von Eye, Alexander; Wiedermann, Wolfgang
2015-01-01
Granger models are popular when it comes to testing hypotheses that relate series of measures causally to each other. In this article, we propose a taxonomy of Granger causality models. The taxonomy results from crossing the four variables Order of Lag, Type of (Contemporaneous) Effect, Direction of Effect, and Segment of Dependent Series…
Mixture Distribution Latent State-Trait Analysis: Basic Ideas and Applications
ERIC Educational Resources Information Center
Courvoisier, Delphine S.; Eid, Michael; Nussbeck, Fridtjof W.
2007-01-01
Extensions of latent state-trait models for continuous observed variables to mixture latent state-trait models with and without covariates of change are presented that can separate individuals differing in their occasion-specific variability. An empirical application to the repeated measurement of mood states (N = 501) revealed that a model with 2…
Implications of complete watershed soil moisture measurements to hydrologic modeling
NASA Technical Reports Server (NTRS)
Engman, E. T.; Jackson, T. J.; Schmugge, T. J.
1983-01-01
A series of six microwave data collection flights for measuring soil moisture were made over a small 7.8 square kilometer watershed in southwestern Minnesota. These flights were made to provide 100 percent coverage of the basin at a 400 m resolution. In addition, three flight lines were flown at preselected areas to provide a sample of data at a higher resolution of 60 m. The low level flights provide considerably more information on soil moisture variability. The results are discussed in terms of reproducibility, spatial variability and temporal variability, and their implications for hydrologic modeling.
Recknagel, Friedrich; Orr, Philip T; Cao, Hongqing
2014-01-01
Seven-day-ahead forecasting models of Cylindrospermopsis raciborskii in three warm-monomictic and mesotrophic reservoirs in south-east Queensland have been developed by means of water quality data from 1999 to 2010 and the hybrid evolutionary algorithm HEA. The resulting models, using either all measured variables or only electronically measurable variables as inputs, accurately forecasted the timing of overgrowth of C. raciborskii and matched well the high and low magnitudes of observed bloom events, with 0.45 ≤ r² ≤ 0.61 and 0.4 ≤ r² ≤ 0.57, respectively. The models also revealed relationships and thresholds triggering bloom events that provide valuable information on synergism between water quality conditions and population dynamics of C. raciborskii. The best-performing models based on all measured variables indicated electrical conductivity (EC) within the range of 206-280 mS/m as the threshold above which fast growth and high abundances of C. raciborskii have been observed for the three lakes. The best models based on electronically measurable variables for Lakes Wivenhoe and Somerset indicated a water temperature (WT) range of 25.5-32.7°C within which fast growth and high abundances of C. raciborskii can be expected. By contrast, the model for Lake Samsonvale highlighted a turbidity (TURB) level of 4.8 NTU as an indicator for mass developments of C. raciborskii. Experiments with online-measured water quality data of Lake Wivenhoe from 2007 to 2010 resulted in predictive models with 0.61 ≤ r² ≤ 0.65, whereby similar levels of EC and WT were again discovered as thresholds for outgrowth of C. raciborskii. The highest validity of r² = 0.75 for an in situ data-based model was achieved after considering time lags of 7 days for EC and 1 day for dissolved oxygen. These time lags were discovered by a systematic screening of all possible combinations of time lags between 0 and 10 days for all electronically measurable variables.
The resulting model performs seven-day-ahead forecasts and is currently implemented and being tested for early warning of C. raciborskii blooms in the Wivenhoe reservoir.
NASA Astrophysics Data System (ADS)
Dons, Evi; Van Poppel, Martine; Kochan, Bruno; Wets, Geert; Int Panis, Luc
2013-08-01
Land use regression (LUR) modeling is a statistical technique used to determine exposure to air pollutants in epidemiological studies. Time-activity diaries can be combined with LUR models, enabling detailed exposure estimation and limiting exposure misclassification, both in shorter and longer time lags. In this study, the traffic-related air pollutant black carbon was measured with μ-aethalometers on a 5-min time base at 63 locations in Flanders, Belgium. The measurements show that hourly concentrations vary between different locations, but also over the day. Furthermore, the diurnal pattern is different for street and background locations. This suggests that annual LUR models are not sufficient to capture all the variation. Hourly LUR models for black carbon are developed using different strategies: by means of dummy variables, with dynamic dependent variables and/or with dynamic and static independent variables. The LUR model with 48 dummies (weekday hours and weekend hours) does not perform as well as the annual model (explained variance of 0.44 compared to 0.77 in the annual model). Using the dataset of hourly black carbon concentrations to recalibrate the annual model results in many of the original explanatory variables losing their statistical significance, and in certain variables having the wrong direction of effect. Building new independent hourly models, with static or dynamic covariates, is proposed as the best solution to these issues. R2 values for hourly LUR models are mostly smaller than the R2 of the annual model, ranging from 0.07 to 0.8. Between 6 a.m. and 10 p.m. on weekdays the R2 approximates the annual model R2. Even though models of consecutive hours are developed independently, similar variables turn out to be significant. Using dynamic covariates instead of static covariates, i.e. hourly traffic intensities and hourly population densities, did not significantly improve the models' performance.
Clustering coefficients of protein-protein interaction networks
NASA Astrophysics Data System (ADS)
Miller, Gerald A.; Shi, Yi Y.; Qian, Hong; Bomsztyk, Karol
2007-05-01
The properties of certain networks are determined by hidden variables that are not explicitly measured. The conditional probability (propagator) that a vertex with a given value of the hidden variable is connected to k other vertices determines all measurable properties. We study hidden variable models and find an averaging approximation that enables us to obtain a general analytical result for the propagator. Analytic results showing the validity of the approximation are obtained. We apply hidden variable models to protein-protein interaction networks (PINs) in which the hidden variable is the association free energy, determined by distributions that depend on biochemistry and evolution. We compute degree distributions as well as clustering coefficients of several PINs of different species; good agreement with measured data is obtained. For the human interactome two different parameter sets give the same degree distributions, but the computed clustering coefficients differ by a factor of about 2. This shows that degree distributions are not sufficient to determine the properties of PINs.
ERIC Educational Resources Information Center
Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.
2016-01-01
Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family for taking into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effect models. The misclassified outcomes are related to the latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition and misclassification probabilities, MHMMs capture cluster level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children Health Study (CHS) to jointly model questionnaire based asthma state and multiple lung function measurements in order to gain better insight about the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
Violation of Bell's Inequality Using Continuous Variable Measurements
NASA Astrophysics Data System (ADS)
Thearle, Oliver; Janousek, Jiri; Armstrong, Seiji; Hosseini, Sara; Schünemann Mraz, Melanie; Assad, Syed; Symul, Thomas; James, Matthew R.; Huntington, Elanor; Ralph, Timothy C.; Lam, Ping Koy
2018-01-01
A Bell inequality is a fundamental test to rule out local hidden variable model descriptions of correlations between two physically separated systems. There have been a number of experiments in which a Bell inequality has been violated using discrete-variable systems. We demonstrate a violation of Bell's inequality using continuous variable quadrature measurements. By creating a four-mode entangled state with homodyne detection, we recorded a clear violation with a Bell value of B = 2.31 ± 0.02. This opens new possibilities for using continuous variable states for device independent quantum protocols.
A model for AGN variability on multiple time-scales
NASA Astrophysics Data System (ADS)
Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.
2018-05-01
We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
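A first-order structure function of the kind compiled in the paper can be computed directly from a light curve as the mean magnitude difference at a given time lag. The sketch below uses a synthetic random-walk light curve as a stand-in for AGN variability; the lag bins and noise scale are arbitrary assumptions.

```python
import numpy as np

def structure_function(t, mag, tau, dtau):
    """First-order SF: mean absolute magnitude difference at time lags near tau."""
    dt = np.abs(t[:, None] - t[None, :])
    dm = np.abs(mag[:, None] - mag[None, :])
    mask = (dt > tau - dtau) & (dt <= tau + dtau)
    return dm[mask].mean()

# Toy light curve: a random-walk proxy for stochastic AGN variability
rng = np.random.default_rng(9)
n = 1200
t = np.arange(n, dtype=float)                     # days
mag = np.cumsum(rng.normal(scale=0.02, size=n))   # magnitudes

sf_short = structure_function(t, mag, tau=10.0, dtau=2.0)
sf_long = structure_function(t, mag, tau=300.0, dtau=20.0)
```

For a random-walk-like process the SF grows with lag, which is why combining variability measurements from days to Gyr on a single SF plot constrains the shape of the underlying PSD.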
Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air
NASA Technical Reports Server (NTRS)
Martos, Borja; Morelli, Eugene A.
2012-01-01
The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied. A major source of colored noise in the explanatory variables was identified as frequency dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow on flight research and instrumentation.
Kafle, Gopi Krishna; Chen, Lide
2016-02-01
There is a lack of literature reporting the methane potential of several livestock manures under the same anaerobic digestion conditions (same inoculum, temperature, time, and size of the digester). To the best of our knowledge, no previous study has reported biochemical methane potential (BMP) prediction models developed and evaluated solely using test results from at least five different livestock manures. The goal of this study was to evaluate the BMP of five different livestock manures (dairy manure (DM), horse manure (HM), goat manure (GM), chicken manure (CM) and swine manure (SM)) and to predict the BMP using different statistical models. Nutrients of the digested manures were also monitored. The BMP tests were conducted under mesophilic temperatures with a manure loading factor of 3.5 g volatile solids (VS)/L and a feed to inoculum ratio (F/I) of 0.5. Single-variable and multiple-variable regression models were developed using manure total carbohydrate (TC), crude protein (CP), total fat (TF), lignin (LIG) and acid detergent fiber (ADF) contents, and measured BMP data. Three different kinetic models (first-order kinetic model, modified Gompertz model, and Chen and Hashimoto model) were evaluated for BMP prediction. The BMPs of DM, HM, GM, CM and SM were measured to be 204, 155, 159, 259, and 323 mL/g VS, respectively, and the VS removals were calculated to be 58.6%, 52.9%, 46.4%, 81.4% and 81.4%, respectively. The technical digestion time (T80-90, the time required to produce 80-90% of total biogas production) for DM, HM, GM, CM and SM was calculated to be in the ranges of 19-28, 27-37, 31-44, 13-18 and 12-17 days, respectively. The effluents from the HM digesters showed the lowest nitrogen, phosphorus and potassium concentrations. The effluents from the CM digesters showed the highest nitrogen and phosphorus concentrations, and the digested SM showed the highest potassium concentration.
Based on the results of the regression analysis, the model using LIG performed best (R² = 0.851, p = 0.026) for BMP prediction among the single-variable models; the model including TC and TF showed the best prediction among the two-variable models (R² = 0.913, p = 0.068-0.075); and the model including CP, LIG and ADF performed best among the three-variable models (R² = 0.999, p = 0.009-0.017). Among the three kinetic models, the first-order kinetic model fitted the measured BMP data best (R² = 0.996-0.998, rRMSE = 0.171-0.381), and deviations between measured and model-predicted BMPs were less than 3.0%.
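The first-order kinetic model that fitted the measured BMPs best can be sketched as a nonlinear least-squares fit of cumulative methane yield. The curve below is synthetic, loosely shaped like the swine manure result (B0 near 323 mL/g VS); the rate constant and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, b0, k):
    """Cumulative methane yield: B(t) = B0 * (1 - exp(-k t))."""
    return b0 * (1.0 - np.exp(-k * t))

# Hypothetical cumulative BMP curve (mL CH4 / g VS) sampled every 3 days
t = np.arange(0, 45, 3.0)
rng = np.random.default_rng(4)
obs = first_order(t, 323.0, 0.12) + rng.normal(scale=5.0, size=t.size)

(b0_hat, k_hat), _ = curve_fit(first_order, t, obs, p0=(300.0, 0.1))

# Technical digestion time T80: time to reach 80% of the ultimate yield
t80 = -np.log(1 - 0.80) / k_hat
```

The fitted rate constant also yields the technical digestion time analytically, since B(t)/B0 = 1 - exp(-kt) gives T80 = -ln(0.2)/k.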
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
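The contrast drawn above can be illustrated with a random-intercept model in statsmodels (assuming it is installed). The data are simulated: a common fixed slope relates the neural measure to behavior, with subject-specific baselines, which is exactly the structure a pooled Pearson correlation over repeated measures ignores.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_sub, n_cond = 40, 4
subj = np.repeat(np.arange(n_sub), n_cond)          # 4 listening conditions per subject
neural = rng.normal(size=n_sub * n_cond)            # neurophysiological predictor
subj_base = rng.normal(scale=1.0, size=n_sub)[subj] # between-subject baseline shifts
behav = 0.6 * neural + subj_base + rng.normal(scale=0.3, size=n_sub * n_cond)

df = pd.DataFrame({"behav": behav, "neural": neural, "subject": subj})

# Random-intercept LME: fixed effect for the neural measure, random intercept
# per subject absorbing the between-subject baseline differences
fit = smf.mixedlm("behav ~ neural", df, groups=df["subject"]).fit()
slope = fit.params["neural"]
```

Because the random intercept soaks up the between-subject variance, the fixed-effect slope estimates the within-condition brain-behavior relationship rather than a mixture of within- and between-subject effects.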
Batterham, Philip J; Bunce, David; Mackinnon, Andrew J; Christensen, Helen
2014-01-01
Very few studies have examined the association between intra-individual reaction time variability and subsequent mortality. Furthermore, the ability of simple measures of variability to predict mortality has not been compared with more complex measures. In a prospective cohort study, 896 community-based Australian adults aged 70+ were interviewed up to four times from 1990 to 2002, with vital status assessed until June 2007. From this cohort, 770-790 participants were included in Cox proportional hazards regression models of survival. Vital status and time in study were used to conduct survival analyses. The mean reaction time and three measures of intra-individual reaction time variability were calculated separately across 20 trials of simple and choice reaction time tasks. Models were adjusted for a range of demographic, physical health and mental health measures. Greater intra-individual simple reaction time variability, as assessed by the raw standard deviation (raw SD), coefficient of variation (CV) or the intra-individual standard deviation (ISD), was strongly associated with an increased hazard of all-cause mortality in adjusted Cox regression models. The mean reaction time had no significant association with mortality. Intra-individual variability in simple reaction time appears to have a robust association with mortality over 17 years. Health professionals such as neuropsychologists may benefit in their detection of neuropathology by supplementing neuropsychiatric testing with the straightforward process of testing simple reaction time and calculating the raw SD or CV.
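The variability measures named above are straightforward to compute per participant. The sketch below uses synthetic reaction times; the ISD here is simplified to the SD of linearly detrended trials, whereas published ISD computations typically also standardize scores across the sample first.

```python
import numpy as np

def rt_variability(trials):
    """Three intra-individual variability measures over one person's RT trials (ms)."""
    raw_sd = np.std(trials, ddof=1)          # raw standard deviation
    cv = raw_sd / np.mean(trials)            # coefficient of variation
    # Simplified ISD-style measure: SD of residuals after removing a linear
    # practice/fatigue trend across trials
    idx = np.arange(trials.size)
    resid = trials - np.polyval(np.polyfit(idx, trials, 1), idx)
    isd = np.std(resid, ddof=1)
    return raw_sd, cv, isd

rng = np.random.default_rng(6)
steady = 300 + rng.normal(scale=10, size=20)   # consistent responder, 20 trials
erratic = 300 + rng.normal(scale=60, size=20)  # variable responder, same mean RT
```

Two participants with identical mean reaction times can thus differ sharply on raw SD and CV, which is what lets these simple quantities carry prognostic information that the mean alone does not.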
The use of auxiliary variables in capture-recapture and removal experiments
Pollock, K.H.; Hines, J.E.; Nichols, J.D.
1984-01-01
The dependence of animal capture probabilities on auxiliary variables is an important practical problem which has not been considered in the development of estimation procedures for capture-recapture and removal experiments. In this paper the linear logistic binary regression model is used to relate the probability of capture to continuous auxiliary variables. The auxiliary variables could be environmental quantities such as air or water temperature, or characteristics of individual animals, such as body length or weight. Maximum likelihood estimators of the population parameters are considered for a variety of models which all assume a closed population. Testing between models is also considered. The models can also be used when one auxiliary variable is a measure of the effort expended in obtaining the sample.
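The linear-logistic capture model can be sketched as a maximum likelihood fit of capture indicators on a continuous auxiliary variable. The data and coefficients below are invented (capture probability rising with body weight); a real analysis would embed this regression within the closed-population likelihoods the paper develops.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n = 2000
weight = rng.normal(loc=30.0, scale=5.0, size=n)   # auxiliary variable (g), assumed
logit = -4.0 + 0.15 * weight                       # heavier animals easier to capture
caught = rng.random(n) < 1 / (1 + np.exp(-logit))  # capture indicator (0/1)

def nll(beta):
    # Negative log-likelihood of the linear-logistic capture model
    eta = beta[0] + beta[1] * weight
    return -np.sum(caught * eta - np.log1p(np.exp(eta)))

b0_hat, b1_hat = minimize(nll, x0=np.zeros(2), method="BFGS").x
```

The fitted coefficients give each animal its own capture probability as a smooth function of the auxiliary variable, instead of assuming one common probability for all individuals.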
Response of winter and spring wheat grain yields to meteorological variation
NASA Technical Reports Server (NTRS)
Feyerherm, A. M.; Kanemasu, E. T.; Paulsen, G. M.
1977-01-01
Mathematical models which quantify the relation of wheat yield to selected weather-related variables are presented. Other sources of variation (amount of applied nitrogen, improved varieties, cultural practices) have been incorporated in the models to explain yield variation both singly and in combination with weather-related variables. Separate models were developed for fall-planted (winter) and spring-planted (spring) wheats. Meteorological variation is observed, basically, by daily measurements of minimum and maximum temperatures, precipitation, and tabled values of solar radiation at the edge of the atmosphere and daylength. Two different soil moisture budgets are suggested to compute simulated values of evapotranspiration; one uses the above-mentioned inputs, the other uses the measured temperatures and precipitation but replaces the tabled values (solar radiation and daylength) by measured solar radiation and satellite-derived multispectral scanner data to estimate leaf area index. Weather-related variables are defined by phenological stages, rather than calendar periods, to make the models more universally applicable.
Aligning physical elements with persons' attitude: an approach using Rasch measurement theory
NASA Astrophysics Data System (ADS)
Camargo, F. R.; Henson, B.
2013-09-01
Affective engineering uses mathematical models to convert information about persons' attitudes towards physical elements into an ergonomic design. However, applications in the domain have in many cases not met measurement assumptions. This paper proposes a novel approach based on Rasch measurement theory to overcome the problem. The research demonstrates that if data fit the model, further variables can be added to a scale. An empirical study was designed to determine the range of compliance over which consumers could form an impression of a moisturizer cream when touching some product containers. Persons, variables and stimulus objects were parameterised independently on a linear continuum. The results showed that a calibrated scale preserves comparability while incorporating further variables.
ERIC Educational Resources Information Center
Kaya, Yasemin; Leite, Walter L.
2017-01-01
Cognitive diagnosis models are diagnostic models used to classify respondents into homogeneous groups based on multiple categorical latent variables representing the measured cognitive attributes. This study aims to present longitudinal models for cognitive diagnosis modeling, which can be applied to repeated measurements in order to monitor…
Modified Regression Correlation Coefficient for Poisson Regression Model
NASA Astrophysics Data System (ADS)
Kaengthong, Nattacha; Domthong, Uthumporn
2017-09-01
This study gives attention to indicators of the predictive power of the Generalized Linear Model (GLM), which are widely used but often subject to restrictions. We are interested in the regression correlation coefficient for a Poisson regression model. This is a measure of predictive power, defined by the relationship between the dependent variable (Y) and the expected value of the dependent variable given the independent variables [E(Y|X)] for the Poisson regression model, in which the dependent variable is Poisson distributed. The purpose of this research was to modify the regression correlation coefficient for the Poisson regression model. We also compare the proposed modified regression correlation coefficient with the traditional regression correlation coefficient in the case of two or more independent variables with multicollinearity among them. The result shows that the proposed regression correlation coefficient is better than the traditional regression correlation coefficient in terms of bias and root mean square error (RMSE).
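The regression correlation coefficient described above, the correlation between Y and E(Y|X), can be illustrated with a small sketch; the log-link coefficients and count data are made up for illustration, not taken from the study:

```python
import math

def pearson(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def poisson_mean(x, b0=0.5, b1=0.3):
    """E(Y|X) under a Poisson GLM with log link; b0, b1 are hypothetical."""
    return math.exp(b0 + b1 * x)

xs = [0, 1, 2, 3, 4]
ys = [1, 2, 3, 6, 8]                    # observed counts (made up)
mus = [poisson_mean(x) for x in xs]     # fitted conditional means E(Y|X)
r = pearson(ys, mus)                    # regression correlation coefficient
```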
Waite, Ian R.
2014-01-01
As part of the USGS study of nutrient enrichment of streams in agricultural regions throughout the United States, about 30 sites within each of eight study areas were selected to capture a gradient of nutrient conditions. The objective was to develop watershed disturbance predictive models for macroinvertebrate and algal metrics at national and three regional landscape scales to obtain a better understanding of important explanatory variables. Explanatory variables in the models were generated from landscape, habitat, and chemistry data. Instream nutrient concentration and variables assessing the amount of disturbance to the riparian zone (e.g., percent row crops or percent agriculture) were selected as the most important explanatory variables in almost all boosted regression tree models, regardless of landscape scale or assemblage. Frequently, TN and TP concentrations and riparian agricultural land-use variables showed a threshold-type response in the modeled biotic metrics at relatively low values. Some measure of habitat condition was also commonly selected in the final invertebrate models, though the variable(s) varied across regions. Results suggest that national models tended to account for more general landscape and climate differences, while regional models incorporated both broad landscape-scale and more specific local-scale variables.
Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.
Fu, Michael J; Cavuşoğlu, M Cenk
2012-12-01
Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less-than-optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation was modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation and the 95% and 67% confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.
Incorporating imperfect detection into joint models of communities: A response to Warton et al.
Beissinger, Steven R.; Iknayan, Kelly J.; Guillera-Arroita, Gurutzeta; Zipkin, Elise; Dorazio, Robert; Royle, Andy; Kery, Marc
2016-01-01
Warton et al. [1] advance community ecology by describing a statistical framework that can jointly model abundances (or distributions) across many taxa to quantify how community properties respond to environmental variables. This framework specifies the effects of both measured and unmeasured (latent) variables on the abundance (or occurrence) of each species. Latent variables are random effects that capture the effects of both missing environmental predictors and correlations in parameter values among different species. As presented in Warton et al., however, the joint modeling framework fails to account for the common problem of detection or measurement errors that always accompany field sampling of abundance or occupancy, and are well known to obscure species- and community-level inferences.
Variable selection in discrete survival models including heterogeneity.
Groll, Andreas; Tutz, Gerhard
2017-04-01
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured discretely and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.
The need to consider temporal variability when modelling exchange at the sediment-water interface
Rosenberry, Donald O.
2011-01-01
Most conceptual or numerical models of flows and processes at the sediment-water interface assume steady-state conditions and do not consider temporal variability. The steady-state assumption is required because temporal variability, if quantified at all, is usually determined on a seasonal or inter-annual scale. In order to design models that can incorporate finer-scale temporal resolution we first need to measure variability at a finer scale. Automated seepage meters that can measure flow across the sediment-water interface with temporal resolution of seconds to minutes were used in a variety of settings to characterize seepage response to rainfall, wind, and evapotranspiration. Results indicate that instantaneous seepage fluxes can be much larger than values commonly reported in the literature, although seepage does not always respond to hydrological processes. Additional study is needed to understand the reasons for the wide range and types of responses to these hydrologic and atmospheric events.
Higher-Order Factors of Personality: Do They Exist?
Ashton, Michael C.; Lee, Kibeom; Goldberg, Lewis R.; de Vries, Reinout E.
2010-01-01
Scales that measure the Big Five personality factors are often substantially intercorrelated. These correlations are sometimes interpreted as implying the existence of two higher-order factors of personality. We show that correlations between measures of broad personality factors do not necessarily imply the existence of higher-order factors, and might instead be due to variables that represent same-signed blends of orthogonal factors. Therefore, the hypotheses of higher-order factors and blended variables can only be tested with data on lower-level personality variables that define the personality factors. We compared the higher-order factor model and the blended variable model in three participant samples using the Big Five Aspect Scales, and found better fit for the latter model. In other analyses using the HEXACO Personality Inventory, we identified mutually uncorrelated markers of six personality factors. We conclude that correlations between personality factor scales can be explained without postulating any higher-order dimensions of personality. PMID:19458345
Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan
2015-01-01
Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) as well as the constant processes involved in the task (β = .45) were related to Gf. 
Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65) indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.
Measuring individual differences in responses to date-rape vignettes using latent variable models.
Tuliao, Antover P; Hoffman, Lesa; McChargue, Dennis E
2017-01-01
Vignette methodology can be a flexible and powerful way to examine individual differences in response to dangerous real-life scenarios. However, most studies underutilize the usefulness of such methodology by analyzing only one outcome, which limits the ability to track event-related changes (e.g., vacillation in risk perception). The current study was designed to illustrate the dynamic influence of risk perception on exit point from a date-rape vignette. Our primary goal was to provide an illustrative example of how to use latent variable models for vignette methodology, including latent growth curve modeling with piecewise slopes, as well as latent variable measurement models. Through the combination of a step-by-step exposition in this text and corresponding model syntax available electronically, we detail an alternative statistical "blueprint" to enhance future violence research efforts using vignette methodology. Aggr. Behav. 43:60-73, 2017. © 2016 Wiley Periodicals, Inc.
W. Henry McNab; F. Thomas Lloyd
2001-01-01
The value of environmental variables as measures of site quality for individual tree growth models was determined for 12 common species of eastern hardwoods in the Southern Appalachian Mountains. Periodic diameter increment was modeled as a function of size, competition and environmental variables for 1,381 trees in even-aged stands of mixed-species. Resulting species...
A provisional effective evaluation when errors are present in independent variables
NASA Technical Reports Server (NTRS)
Gurin, L. S.
1983-01-01
Algorithms are examined for evaluating the parameters of a regression model when there are errors in the independent variables. The algorithms are fast, and the estimates they yield are stable with respect to correlated errors in the measurements of both the dependent and the independent variables.
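The report's specific algorithms are not described in the abstract, but the underlying problem, attenuation of a regression slope by errors in the independent variable, and the textbook method-of-moments correction can be sketched as follows (all parameter values are illustrative):

```python
import random

random.seed(0)
n = 20000
beta = 2.0                                  # true slope (hypothetical)
x_true = [random.gauss(0, 1) for _ in range(n)]
x_obs = [x + random.gauss(0, 0.5) for x in x_true]   # error in the regressor
y = [beta * x for x in x_true]              # noiseless response, for clarity

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

b_naive = slope(x_obs, y)                   # attenuated toward zero
reliability = 1.0 / (1.0 + 0.5 ** 2)        # var(X) / (var(X) + var(error))
b_corrected = b_naive / reliability         # method-of-moments correction
```

With error standard deviation 0.5 the naive slope shrinks by the reliability factor 0.8; dividing it back out recovers the true slope.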
Measurement of Psychological Disorders Using Cognitive Diagnosis Models
ERIC Educational Resources Information Center
Templin, Jonathan L.; Henson, Robert A.
2006-01-01
Cognitive diagnosis models are constrained (multiple classification) latent class models that characterize the relationship of questionnaire responses to a set of dichotomous latent variables. Having emanated from educational measurement, several aspects of such models seem well suited to use in psychological assessment and diagnosis. This article…
Brandstätter, Christian; Laner, David; Prantl, Roman; Fellner, Johann
2014-12-01
Municipal solid waste landfills pose a threat to the environment and human health, especially old landfills that lack facilities for collection and treatment of landfill gas and leachate. Consequently, missing information about emission flows prevents site-specific environmental risk assessments. To overcome this gap, combining waste sampling and analysis with statistical modeling is one option for estimating present and future emission potentials. Optimizing the tradeoff between investigation costs and reliable results requires knowledge of both the number of samples to be taken and the variables to be analyzed. This article aims to identify the optimal number of waste samples and variables needed to predict a larger set of variables. We therefore introduce a multivariate linear regression model and test its applicability in two case studies. Landfill A was used to set up and calibrate the model based on 50 waste samples and twelve variables. The calibrated model was applied to Landfill B, comprising 36 waste samples and twelve variables, with four predictor variables. The case study results are twofold: first, reliable and accurate prediction of the twelve variables can be achieved with knowledge of four predictor variables (LOI, EC, pH and Cl). Second, for Landfill B only ten full measurements would be needed for a reliable prediction of most response variables. The four predictor variables exhibit comparably low analytical costs relative to the full set of measurements. This cost reduction could be used to increase the number of samples, yielding an improved understanding of the spatial waste heterogeneity in landfills. In conclusion, future application of the developed model could improve the reliability of predicted emission potentials, and the model could become a standard screening tool for old landfills if its applicability and reliability are tested in additional case studies. Copyright © 2014 Elsevier Ltd. All rights reserved.
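A minimal sketch of the kind of multivariate linear regression described above (predicting a larger set of response variables from a few cheap predictors), using synthetic data in place of the landfill measurements; the dimensions mirror the abstract but every value is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-ins for 50 samples of four predictors (e.g. LOI, EC, pH, Cl)
X = rng.normal(size=(50, 4))
B_true = rng.normal(size=(4, 8))                 # coefficients for 8 responses
Y = X @ B_true + rng.normal(scale=0.1, size=(50, 8))

# One multivariate linear model: every response column regressed on X at once
X1 = np.column_stack([np.ones(len(X)), X])       # prepend an intercept column
B_hat, *_ = np.linalg.lstsq(X1, Y, rcond=None)

# Predict all responses for samples where only the predictors were measured
Y_pred = X1 @ B_hat
```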
A geospatial model of ambient sound pressure levels in the contiguous United States.
Mennitt, Daniel; Sherrill, Kirk; Fristrup, Kurt
2014-05-01
This paper presents a model that predicts measured sound pressure levels using geospatial features such as topography, climate, hydrology, and anthropogenic activity. The model utilizes random forest, a tree-based machine learning algorithm, which does not incorporate a priori knowledge of source characteristics or propagation mechanics. The response data encompasses 270 000 h of acoustical measurements from 190 sites located in National Parks across the contiguous United States. The explanatory variables were derived from national geospatial data layers and cross validation procedures were used to evaluate model performance and identify variables with predictive power. Using the model, the effects of individual explanatory variables on sound pressure level were isolated and quantified to reveal systematic trends across environmental gradients. Model performance varies by the acoustical metric of interest; the seasonal L50 can be predicted with a median absolute deviation of approximately 3 dB. The primary application for this model is to generalize point measurements to maps expressing spatial variation in ambient sound levels. An example of this mapping capability is presented for Zion National Park and Cedar Breaks National Monument in southwestern Utah.
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
NASA Astrophysics Data System (ADS)
Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim
2017-04-01
Low-visibility conditions have a large impact on aviation safety and economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested with a 30 minute forecast interval up to two hours, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
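An ordered logistic regression of the kind used for the four low-visibility classes can be sketched as follows; the cut points and linear predictor below are hypothetical, not fitted values from the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(eta, thresholds):
    """Class probabilities for an ordered logit model:
    P(Y <= k) = sigmoid(theta_k - eta), thresholds strictly increasing."""
    cum = [sigmoid(t - eta) for t in thresholds] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical cut points giving 4 visibility classes, and one value of the
# linear predictor (which would combine visibility and humidity inputs)
thresholds = [-1.0, 0.5, 2.0]
probs = ordered_logit_probs(eta=1.2, thresholds=thresholds)
```

In the fitted model, eta would be a linear combination of the selected meteorological predictors, and the class with the highest probability (or the full distribution) supports the tactical planning decision.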
Reconstructing Mammalian Sleep Dynamics with Data Assimilation
Sedigh-Sarvestani, Madineh; Schiff, Steven J.; Gluckman, Bruce J.
2012-01-01
Data assimilation is a valuable tool in the study of any complex system, where measurements are incomplete, uncertain, or both. It enables the user to take advantage of all available information including experimental measurements and short-term model forecasts of a system. Although data assimilation has been used to study other biological systems, the study of the sleep-wake regulatory network has yet to benefit from this toolset. We present a data assimilation framework based on the unscented Kalman filter (UKF) for combining sparse measurements together with a relatively high-dimensional nonlinear computational model to estimate the state of a model of the sleep-wake regulatory system. We demonstrate with simulation studies that a few noisy variables can be used to accurately reconstruct the remaining hidden variables. We introduce a metric for ranking relative partial observability of computational models, within the UKF framework, that allows us to choose the optimal variables for measurement and also provides a methodology for optimizing framework parameters such as UKF covariance inflation. In addition, we demonstrate a parameter estimation method that allows us to track non-stationary model parameters and accommodate slow dynamics not included in the UKF filter model. Finally, we show that we can even use observed discretized sleep-state, which is not one of the model variables, to reconstruct model state and estimate unknown parameters. Sleep is implicated in many neurological disorders from epilepsy to schizophrenia, but simultaneous observation of the many brain components that regulate this behavior is difficult. We anticipate that this data assimilation framework will enable better understanding of the detailed interactions governing sleep and wake behavior and provide for better, more targeted, therapies. PMID:23209396
Characterization of the spatial variability of channel morphology
Moody, J.A.; Troutman, B.M.
2002-01-01
The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
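The first-order bivariate autoregressive structure described above can be illustrated with a minimal AR(1) simulation of log-transformed width and depth along a channel link; the coefficient and innovation scales are illustrative, not the fitted values from the paper:

```python
import random

random.seed(7)
phi = 0.8                 # lag-1 autoregressive coefficient (illustrative)
n = 500
w, d = [0.0], [0.0]       # mean-removed log width and log depth along the link
for _ in range(n - 1):
    e_w = random.gauss(0, 0.2)
    e_d = 0.5 * e_w + random.gauss(0, 0.17)   # correlated innovations couple the pair
    w.append(phi * w[-1] + e_w)
    d.append(phi * d[-1] + e_d)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation; should land near phi for an AR(1)."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den
```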
Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E
2011-06-22
Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. 
By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
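The contrast between classical-type and Berkson-type multiplicative error can be sketched in a toy simulation. The exposure distribution, error scale, and coefficient below are arbitrary, and the outcome is a noiseless linear function of log exposure rather than the study's Poisson count model, so the Berkson case comes out unbiased here:

```python
import math
import random

random.seed(3)
n = 20000
beta = 0.05                  # hypothetical log-linear exposure effect

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

# Classical-type: observed = true * exp(noise); the estimate is attenuated.
x_true = [math.exp(random.gauss(0, 0.6)) for _ in range(n)]
y = [beta * math.log(x) for x in x_true]
sigma_u = 0.4
x_obs = [x * math.exp(random.gauss(0, sigma_u)) for x in x_true]
b_classical = slope([math.log(x) for x in x_obs], y)
attenuation = 0.6 ** 2 / (0.6 ** 2 + sigma_u ** 2)  # expected shrinkage factor

# Berkson-type: true = assigned * exp(noise); the slope stays unbiased here.
x_assigned = [math.exp(random.gauss(0, 0.6)) for _ in range(n)]
x_true_b = [x * math.exp(random.gauss(0, sigma_u)) for x in x_assigned]
y_b = [beta * math.log(x) for x in x_true_b]
b_berkson = slope([math.log(x) for x in x_assigned], y_b)
```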
ERIC Educational Resources Information Center
Seo, Hyojeong; Little, Todd D.; Shogren, Karrie A.; Lang, Kyle M.
2016-01-01
Structural equation modeling (SEM) is a powerful and flexible analytic tool to model latent constructs and their relations with observed variables and other constructs. SEM applications offer advantages over classical models in dealing with statistical assumptions and in adjusting for measurement error. So far, however, SEM has not been fully used…
Barycentric parameterizations for isotropic BRDFs.
Stark, Michael M; Arvo, James; Smits, Brian
2005-01-01
A bidirectional reflectance distribution function (BRDF) is often expressed as a function of four real variables: two spherical coordinates in each of the "incoming" and "outgoing" directions. However, many BRDFs reduce to functions of fewer variables. For example, isotropic reflection can be represented by a function of three variables. Some BRDF models can be reduced further. In this paper, we introduce new sets of coordinates which we use to reduce the dimensionality of several well-known analytic BRDFs as well as empirically measured BRDF data. The proposed coordinate systems are barycentric with respect to a triangular support with a direct physical interpretation. One coordinate set is based on the BRDF model proposed by Lafortune. Another set, based on a model of Ward, is associated with the "halfway" vector common in analytical BRDF formulas. Through these coordinate sets we establish lower bounds on the approximation error inherent in the models on which they are based. We present a third set of coordinates, not based on any analytical model, that performs well in approximating measured data. Finally, our proposed variables suggest novel ways of constructing and visualizing BRDFs.
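The "halfway" vector mentioned above is a standard construction in BRDF formulas: the normalized sum of the incoming and outgoing unit directions. A minimal sketch (the example directions are arbitrary, not data from the paper):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def halfway(w_in, w_out):
    """Unit 'halfway' vector between incoming and outgoing unit directions."""
    return normalize(tuple(a + b for a, b in zip(w_in, w_out)))

# Arbitrary example: light straight down the normal, view along the x axis
h = halfway((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```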
UV solar irradiance in observations and the NRLSSI and SATIRE-S models
NASA Astrophysics Data System (ADS)
Yeo, K. L.; Ball, W. T.; Krivova, N. A.; Solanki, S. K.; Unruh, Y. C.; Morrill, J.
2015-08-01
Total solar irradiance and UV spectral solar irradiance have been monitored since 1978 through a succession of space missions. This is accompanied by the development of models aimed at replicating solar irradiance by relating the variability to solar magnetic activity. The Naval Research Laboratory Solar Spectral Irradiance (NRLSSI) and Spectral And Total Irradiance REconstruction for the Satellite era (SATIRE-S) models provide the most comprehensive reconstructions of total and spectral solar irradiance over the period of satellite observation currently available. There is persistent controversy between the various measurements and models in terms of the wavelength dependence of the variation over the solar cycle, with repercussions for our understanding of the influence of UV solar irradiance variability on the stratosphere. We review the measurement and modeling of UV solar irradiance variability over the period of satellite observation. The SATIRE-S reconstruction is consistent with spectral solar irradiance observations where they are reliable. It is also supported by an independent, empirical reconstruction of UV spectral solar irradiance based on Upper Atmosphere Research Satellite/Solar Ultraviolet Spectral Irradiance Monitor measurements from an earlier study. The weaker solar cycle variability produced by NRLSSI between 300 and 400 nm is not evident in any available record. We show that although the method employed to construct NRLSSI is principally sound, reconstructed solar cycle variability is detrimentally affected by the uncertainty in the SSI observations it draws upon in the derivation. Based on our findings, we recommend, when choosing between the two models, the use of SATIRE-S for climate studies.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
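The two error-model forms can be written down concretely: on the log scale the multiplicative model becomes an ordinary linear fit, which is why it separates systematic (alpha, beta) from random (eps) errors cleanly. The parameter values below are hypothetical and the data are noise-free for clarity:

```python
import math

# Additive model:        measured = truth + systematic + random_error
# Multiplicative model:  measured = alpha * truth**beta * eps
# Taking logs makes the multiplicative model linear:
#   log(measured) = log(alpha) + beta * log(truth) + log(eps)

def multiplicative_fit(truth, measured):
    """Least-squares fit of log(measured) = a + b*log(truth); a = log(alpha)."""
    lx = [math.log(t) for t in truth]
    ly = [math.log(m) for m in measured]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = my - b * mx
    return a, b

# Noise-free synthetic data with alpha = 1.2, beta = 0.9 (hypothetical values)
truth = [0.5, 1.0, 2.0, 5.0, 10.0, 20.0]
measured = [1.2 * t ** 0.9 for t in truth]
a, b = multiplicative_fit(truth, measured)
```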
Suzuki, Etsuji; Yamamoto, Eiji; Takao, Soshi; Kawachi, Ichiro; Subramanian, S. V.
2012-01-01
Background Multilevel analyses are ideally suited to assess the effects of ecological (higher level) and individual (lower level) exposure variables simultaneously. In applying such analyses to measures of ecologies in epidemiological studies, individual variables are usually aggregated into the higher level unit. Typically, the aggregated measure includes responses of every individual belonging to that group (i.e. it constitutes a self-included measure). More recently, researchers have developed an aggregate measure which excludes the response of the individual to whom the aggregate measure is linked (i.e. a self-excluded measure). In this study, we clarify the substantive and technical properties of these two measures when they are used as exposures in multilevel models. Methods Although the differences between the two aggregated measures are mathematically subtle, distinguishing between them is important in terms of the specific scientific questions to be addressed. We then show how these measures can be used in two distinct types of multilevel models—self-included model and self-excluded model—and interpret the parameters in each model by imposing hypothetical interventions. The concept is tested on empirical data of workplace social capital and employees' systolic blood pressure. Results Researchers assume group-level interventions when using a self-included model, and individual-level interventions when using a self-excluded model. Analytical re-parameterizations of these two models highlight their differences in parameter interpretation. Cluster-mean centered self-included models enable researchers to decompose the collective effect into its within- and between-group components. The benefit of cluster-mean centering procedure is further discussed in terms of hypothetical interventions. 
Conclusions When investigating the potential roles of aggregated variables, researchers should carefully explore which type of model—self-included or self-excluded—is suitable for a given situation, particularly when group sizes are relatively small. PMID:23251609
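The mathematically subtle difference between the two aggregated measures is just the plain group mean versus the leave-one-out mean. A minimal numpy sketch (group sizes and values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
group = np.repeat(np.arange(5), 4)        # 5 workplaces, 4 workers each
x = rng.normal(size=group.size)           # individual exposure (e.g. social capital)

# Self-included aggregate: plain group mean, linked to every member
sums = np.bincount(group, weights=x)
sizes = np.bincount(group)
self_included = (sums / sizes)[group]

# Self-excluded aggregate: leave-one-out mean over the *other* group members
self_excluded = (sums[group] - x) / (sizes[group] - 1)

print(self_included[:4])
print(self_excluded[:4])
```

The two measures are linked by the identity self_included = ((n - 1) * self_excluded + x) / n, which is why the distinction matters most when group sizes n are small, as the Conclusions note.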
Roelen, Corné A M; Stapelfeldt, Christina M; Heymans, Martijn W; van Rhenen, Willem; Labriola, Merete; Nielsen, Claus V; Bültmann, Ute; Jensen, Chris
2015-06-01
To validate Dutch prognostic models including age, self-rated health and prior sickness absence (SA) for ability to predict high SA in Danish eldercare. The added value of work environment variables to the models' risk discrimination was also investigated. 2,562 municipal eldercare workers (95% women) participated in the Working in Eldercare Survey. Predictor variables were measured by questionnaire at baseline in 2005. Prognostic models were validated for predictions of high (≥30) SA days and high (≥3) SA episodes retrieved from employer records during 1-year follow-up. The accuracy of predictions was assessed by calibration graphs and the ability of the models to discriminate between high- and low-risk workers was investigated by ROC-analysis. The added value of work environment variables was measured with Integrated Discrimination Improvement (IDI). 1,930 workers had complete data for analysis. The models underestimated the risk of high SA in eldercare workers and the SA episodes model had to be re-calibrated to the Danish data. Discrimination was practically useful for the re-calibrated SA episodes model, but not the SA days model. Physical workload improved the SA days model (IDI = 0.40; 95% CI 0.19-0.60) and psychosocial work factors, particularly the quality of leadership (IDI = 0.70; 95% CI 053-0.86) improved the SA episodes model. The prognostic model predicting high SA days showed poor performance even after physical workload was added. The prognostic model predicting high SA episodes could be used to identify high-risk workers, especially when psychosocial work factors are added as predictor variables.
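The IDI used above quantifies how much an added predictor improves the discrimination slope (mean predicted risk in cases minus mean predicted risk in non-cases). A minimal sketch with invented toy probabilities (not the study's data):

```python
import numpy as np

def integrated_discrimination_improvement(y, p_old, p_new):
    """IDI: gain in discrimination slope when predictors are added.

    Discrimination slope = mean predicted risk among cases minus
    mean predicted risk among non-cases.
    """
    y = np.asarray(y, dtype=bool)
    slope_old = p_old[y].mean() - p_old[~y].mean()
    slope_new = p_new[y].mean() - p_new[~y].mean()
    return slope_new - slope_old

# Toy example: the extended model separates cases from non-cases better.
y     = np.array([1, 1, 1, 0, 0, 0])
p_old = np.array([0.6, 0.5, 0.4, 0.5, 0.4, 0.3])
p_new = np.array([0.8, 0.7, 0.6, 0.3, 0.2, 0.1])
print(round(integrated_discrimination_improvement(y, p_old, p_new), 3))
```

A positive IDI, as for physical workload and quality of leadership above, means the added variable pushes predicted risks for high-SA and low-SA workers further apart.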
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
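A backwards elimination loop of the kind described can be sketched generically: repeatedly drop the least-important predictor and keep the best-scoring subset. To stay self-contained, this sketch plugs in a simple |correlation| importance and score as stand-ins for a random forest's permutation importances and out-of-bag accuracy; the data and thresholds are invented.

```python
import numpy as np

def backward_eliminate(X, y, importance_fn, score_fn, min_vars=1):
    """Generic backwards elimination: repeatedly drop the least-important
    predictor and keep the subset with the best score.  importance_fn and
    score_fn are pluggable, so an RF learner (permutation importances,
    out-of-bag accuracy) can be substituted for the stand-ins below."""
    keep = list(range(X.shape[1]))
    best_keep, best_score = list(keep), score_fn(X[:, keep], y)
    while len(keep) > min_vars:
        imp = importance_fn(X[:, keep], y)
        keep.pop(int(np.argmin(imp)))          # drop the weakest predictor
        s = score_fn(X[:, keep], y)
        if s >= best_score:
            best_keep, best_score = list(keep), s
    return best_keep, best_score

# Stand-in learner: |correlation| as importance, mean |correlation| as score.
rng = np.random.default_rng(1)
y = rng.normal(size=200)
X = np.column_stack([y + rng.normal(size=200),               # informative
                     y + rng.normal(size=200, scale=5.0),    # weak
                     rng.normal(size=200)])                  # pure noise

imp_fn = lambda X, y: np.abs([np.corrcoef(X[:, j], y)[0, 1]
                              for j in range(X.shape[1])])
score_fn = lambda X, y: imp_fn(X, y).mean()

kept, score = backward_eliminate(X, y, imp_fn, score_fn)
print("kept predictors:", kept)
```

The paper's caution applies directly here: because elimination is greedy, accuracy alone can be a fragile criterion, which is why the authors also examine the stability of the resulting spatial predictions.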
Modeling longitudinal data, I: principles of multivariate analysis.
Ravani, Pietro; Barrett, Brendan; Parfrey, Patrick
2009-01-01
Statistical models are used to study the relationship between exposure and disease while accounting for the potential impact of other factors on outcomes. This adjustment is useful to obtain unbiased estimates of true effects or to predict future outcomes. Statistical models include a systematic component and an error component. The systematic component explains the variability of the response variable as a function of the predictors and is summarized in the effect estimates (model coefficients). The error component of the model represents the variability in the data unexplained by the model and is used to build measures of precision around the point estimates (confidence intervals).
Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats
2015-05-01
Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.
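The three sources of variability described above can be made concrete with an Euler-Maruyama simulation of a stochastic one-compartment elimination model (all parameter values, noise magnitudes, and the cohort size are illustrative assumptions, not the study's): system noise enters the dynamics, measurement error enters the observations, and interindividual variability enters through subject-specific parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama simulation of dC = -ke * C dt + sigma_w dW,
# observed with additive measurement error.
def simulate_subject(ke, n=100, dt=0.1, sigma_w=0.05, sigma_e=0.1):
    c = np.empty(n)
    c[0] = 10.0                                    # initial concentration
    for i in range(1, n):
        drift = -ke * c[i - 1]                     # first-order elimination
        c[i] = c[i - 1] + drift * dt + sigma_w * np.sqrt(dt) * rng.normal()
    return c + sigma_e * rng.normal(size=n)        # measurement error

# Interindividual variability: each subject draws its own elimination rate.
ke_pop = np.exp(rng.normal(np.log(0.5), 0.2, size=20))
data = np.array([simulate_subject(ke) for ke in ke_pop])
print(data.shape)
```

Fitting a deterministic model to such data forces the sigma_w term into the residuals, which is exactly the mechanism by which the abstract reports an overestimated measurement error variance when the stochastic part is neglected.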
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Uncertainty analyses of the calibrated parameter values of a water quality model
NASA Astrophysics Data System (ADS)
Rode, M.; Suhr, U.; Lindenschmidt, K.-E.
2003-04-01
For river basin management water quality models are increasingly used for the analysis and evaluation of different management measures. However substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. The extension of the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover the investigations show that highly variable water quality variables like the algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation, such as nitrate.
Decadal Variability of Surface Incident Solar Radiation over China
NASA Astrophysics Data System (ADS)
Wang, Kaicun
2015-04-01
Observations have reported a widespread dimming of surface incident solar radiation (Rs) from the 1950s to the 1980s and a brightening afterwards. However, none of the state-of-the-art earth system models, including those from the Coupled Model Intercomparison Project phase 5 (CMIP5), could successfully reproduce the dimming/brightening rates over China. This study provides metadata and reference data to investigate the observed variability of Rs in China. From 1958 to 1990, diffuse solar radiation (Rsdif) and direct solar radiation (Rsdir) were measured separately in China, from which Rs was calculated as their sum. However, pyranometers used to measure Rsdif had a strong sensitivity-drift problem, which introduced a spurious decreasing trend into Rsdif and Rs measurements. The observed Rsdir did not suffer from this sensitivity drift. From 1990 to 1993, the old instruments were replaced and measuring stations were relocated in China, which introduced an abrupt increase in the observed Rs. After 1993, Rs was measured by solid black thermopile pyranometers. Comprehensive comparisons between observation-based and model-based Rs performed in this research have shown that sunshine duration (SunDu)-derived Rs is of high quality and provides an accurate estimate of the decadal variability of Rs over China. SunDu-derived Rs averaged over 105 stations in China decreased at -2.9 W m-2 per decade from 1961 to 1990 and remained stable afterward. This decadal variability has been confirmed by the observed Rsdir and by independent studies on aerosols and diurnal temperature range, and can be reproduced by certain high-quality earth system models.
However, neither satellite retrievals (the Global Energy and Water Exchanges Project Surface Radiation Budget (GEWEX SRB)) nor reanalyses (ERA-Interim and Modern-Era Retrospective analysis for Research and Applications (MERRA)) can accurately reproduce such decadal variability of Rs over China for their exclusion of annual variability of tropospheric aerosols.
NASA Astrophysics Data System (ADS)
Badawi, Ahmed M.; Weiss, Elisabeth; Sleeman, William C., IV; Hugo, Geoffrey D.
2012-01-01
The purpose of this study is to develop and evaluate a lung tumour interfraction geometric variability classification scheme as a means to guide adaptive radiotherapy and improve measurement of treatment response. Principal component analysis (PCA) was used to generate statistical shape models of the gross tumour volume (GTV) for 12 patients with weekly breath hold CT scans. Each eigenmode of the PCA model was classified as ‘trending’ or ‘non-trending’ depending on whether its contribution to the overall GTV variability included a time trend over the treatment course. Trending eigenmodes were used to reconstruct the original semi-automatically delineated GTVs into a reduced model containing only time trends. Reduced models were compared to the original GTVs by analyzing the reconstruction error in GTV volume and position. Both retrospective (all weekly images) and prospective (only the first four weekly images) models were evaluated. The average volume difference from the original GTV was 4.3% ± 2.4% for the trending model. The positional variability of the GTV over the treatment course, as measured by the standard deviation of the GTV centroid, was 1.9 ± 1.4 mm for the original GTVs, which was reduced to 1.2 ± 0.6 mm for the trending-only model. In 3/13 cases, the dominant eigenmode changed class between the prospective and retrospective models. The trending-only model preserved GTV volume and shape relative to the original GTVs, while reducing spurious positional variability. The classification scheme appears feasible for separating types of geometric variability by time trend.
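The trending/non-trending classification can be sketched with numpy alone: run PCA (via SVD) on weekly shape vectors and label an eigenmode "trending" if its per-week scores correlate with week number. The toy shape data, mode amplitudes, and the |r| > 0.7 threshold are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "GTV shapes": 8 weekly observations x 30 surface-point coordinates.
weeks = np.arange(8)
base = rng.normal(size=30)
trend_mode = rng.normal(size=30)      # direction that shrinks over treatment
noise_mode = rng.normal(size=30)      # direction with random week-to-week change
shapes = (base
          + np.outer(-1.0 * weeks, trend_mode)         # systematic time trend
          + np.outer(rng.normal(size=8), noise_mode))  # non-trending variability

# PCA via SVD of the mean-centred data
centred = shapes - shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = U * S                         # per-week loadings on each eigenmode

# Classify each eigenmode: "trending" if its scores track week number
rs = []
for k in range(2):
    r = np.corrcoef(scores[:, k], weeks)[0, 1]
    rs.append(abs(r))
    label = "trending" if abs(r) > 0.7 else "non-trending"
    print(f"mode {k}: |r| = {abs(r):.2f} ({label})")
```

Reconstructing the shapes from only the trending eigenmodes then yields the reduced, trend-only model the abstract compares against the original GTVs.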
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of the vulnerable locations. An alternative to the usual hydrodynamic models to predict vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion. Thus, riverbank erosion can be determined by a regression model using independent variables that are considered to affect the erosion process. The impact of such variables may vary spatially; therefore, a non-stationary regression model is preferred over a stationary equivalent. Locally Weighted Regression (LWR) is proposed as a suitable choice. This method can be extended to predict the binary presence or absence of erosion based on a series of independent local variables by using the logistic regression model. It is referred to as Locally Weighted Logistic Regression (LWLR). Logistic regression is a type of regression analysis used for predicting the outcome of a categorical dependent variable (e.g. binary response) based on one or more predictor variables. The method can be combined with LWR to assign local weights to the independent variables used to model the dependent one. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity. The probabilities of the possible outcomes are modelled as a function of the independent variables using a logistic function. Logistic regression measures the relationship between a categorical dependent variable and, usually, one or several continuous independent variables by converting the dependent variable to probability scores. Then, a logistic regression is formed, which predicts success or failure of a given binary variable (e.g. erosion presence or absence) for any value of the independent variables.
The erosion occurrence probability can be calculated in conjunction with the model deviance regarding the independent variables tested. The most straightforward measure for goodness of fit is the G statistic. It is a simple and effective way to study and evaluate the Logistic Regression model efficiency and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist managing erosion and flooding events. Acknowledgements This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers). The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
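The LWLR idea sketched above (tricubic spatial weights feeding a weighted logistic regression) can be written compactly in numpy. Everything below is a minimal illustration: the river positions, bank-slope covariate, bandwidth, and coefficients are invented, and the fit uses a plain iteratively reweighted least squares loop rather than the authors' implementation.

```python
import numpy as np

def tricubic(d, bandwidth):
    """Tricubic spatial dependence function: weight decays to 0 at the bandwidth."""
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1.0 - u ** 3) ** 3

def weighted_logistic(X, y, w, iters=25):
    """Weighted logistic regression fitted by iteratively reweighted least squares."""
    Xd = np.column_stack([np.ones(len(y)), X])     # add intercept
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = w * p * (1.0 - p)
        H = Xd.T @ (W[:, None] * Xd) + 1e-6 * np.eye(Xd.shape[1])
        beta += np.linalg.solve(H, Xd.T @ (w * (y - p)))
    return beta

def lwlr_predict(x0, feat0, locs, X, y, bandwidth):
    """Erosion probability at location x0, weighting observations by distance."""
    w = tricubic(np.linalg.norm(locs - x0, axis=1), bandwidth)
    beta = weighted_logistic(X, y, w)
    return 1.0 / (1.0 + np.exp(-(beta[0] + feat0 @ beta[1:])))

# Demo: erosion (1/0) depends on bank slope; positions lie along the river.
rng = np.random.default_rng(5)
locs = rng.uniform(0, 10, size=(60, 1))
slope = rng.uniform(0, 1, size=60)
y = (rng.uniform(size=60) < 1 / (1 + np.exp(-(4 * slope - 2)))).astype(float)

p_steep = lwlr_predict(np.array([5.0]), np.array([0.9]), locs, slope[:, None], y, bandwidth=6.0)
p_flat  = lwlr_predict(np.array([5.0]), np.array([0.1]), locs, slope[:, None], y, bandwidth=6.0)
print(f"P(erosion | steep bank) = {p_steep:.2f}, P(erosion | flat bank) = {p_flat:.2f}")
```

Swapping `tricubic` for an exponential kernel reproduces the two spatial dependence functions compared in the study; model deviance (the G statistic) would then be computed per fitted local model.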
NASA Astrophysics Data System (ADS)
Canion, Andy; MacIntyre, Hugh L.; Phipps, Scott
2013-10-01
The inputs of primary productivity models may be highly variable on short timescales (hourly to daily) in turbid estuaries, but modeling of productivity in these environments is often implemented with data collected over longer timescales. Daily, seasonal, and spatial variability in primary productivity model parameters: chlorophyll a concentration (Chla), the downwelling light attenuation coefficient (kd), and photosynthesis-irradiance response parameters (Pmchl, αChl) were characterized in Weeks Bay, a nitrogen-impacted shallow estuary in the northern Gulf of Mexico. Variability in primary productivity model parameters in response to environmental forcing, nutrients, and microalgal taxonomic marker pigments were analysed in monthly and short-term datasets. Microalgal biomass (as Chla) was strongly related to total phosphorus concentration on seasonal scales. Hourly data support wind-driven resuspension as a major source of short-term variability in Chla and light attenuation (kd). The empirical relationship between areal primary productivity and a combined variable of biomass and light attenuation showed that variability in the photosynthesis-irradiance response contributed little to the overall variability in primary productivity, and Chla alone could account for 53-86% of the variability in primary productivity. Efforts to model productivity in similar shallow systems with highly variable microalgal biomass may benefit the most by investing resources in improving spatial and temporal resolution of chlorophyll a measurements before increasing the complexity of models used in productivity modeling.
Hydrologic Remote Sensing and Land Surface Data Assimilation.
Moradkhani, Hamid
2008-05-06
Accurate, reliable and skillful forecasting of key environmental variables such as soil moisture and snow is of paramount importance due to their strong influence on many water resources applications, including flood control, agricultural production and effective water resources management. Soil moisture is a key state variable in land surface-atmosphere interactions affecting surface energy fluxes, runoff and the radiation balance. Snow processes also have a large influence on land-atmosphere energy exchanges due to snow's high albedo, low thermal conductivity and considerable spatial and temporal variability, resulting in dramatic changes in surface and ground temperatures. Measurement of these two variables is possible through a variety of ground-based and remote sensing procedures. Remote sensing, however, holds great promise for soil moisture and snow measurements, which have considerable spatial and temporal variability. Merging these measurements with hydrologic model outputs in a systematic and effective way results in an improvement of land surface model prediction. Data assimilation provides a mechanism to combine these two sources of estimation. Much success has been attained in recent years in using data from passive microwave sensors and assimilating them into the models. This paper provides an overview of the remote sensing measurement techniques for soil moisture and snow data and describes the advances in data assimilation techniques through ensemble filtering, mainly the Ensemble Kalman filter (EnKF) and the Particle filter (PF), for improving model prediction and reducing the uncertainties involved in the prediction process.
It is believed that the PF provides a complete representation of the probability distribution of the state variables of interest (via the sequential Bayes law) and could be a strong alternative to the EnKF, which is subject to limitations including its linear updating rule and the assumption of jointly normally distributed errors in state variables and observations.
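A bootstrap particle filter of the kind referred to above fits in a few lines of numpy. The scalar "soil moisture" state, its toy propagation model, and the noise levels are illustrative assumptions, not the paper's configuration; the weight-and-resample step is the sequential Bayes update.

```python
import numpy as np

rng = np.random.default_rng(11)

def particle_filter(obs, n_particles=500, sigma_sys=0.05, sigma_obs=0.1):
    """Bootstrap particle filter for a scalar state with a toy dynamics model."""
    particles = rng.normal(0.5, 0.2, n_particles)   # initial ensemble
    estimates = []
    for z in obs:
        # Propagate each particle through the (nonlinear) model + system noise
        particles = (particles + 0.1 * (0.5 - particles) ** 3
                     + rng.normal(0, sigma_sys, n_particles))
        # Weight by the observation likelihood (sequential Bayes update)
        w = np.exp(-0.5 * ((z - particles) / sigma_obs) ** 2)
        w /= w.sum()
        estimates.append(np.sum(w * particles))
        # Resample to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(estimates)

truth = 0.5 + 0.2 * np.sin(np.linspace(0, 6, 40))
obs = truth + rng.normal(0, 0.1, size=truth.size)
est = particle_filter(obs)
rmse_obs = np.sqrt(np.mean((obs - truth) ** 2))
rmse_est = np.sqrt(np.mean((est - truth) ** 2))
print(f"RMSE raw obs: {rmse_obs:.3f}, PF estimate: {rmse_est:.3f}")
```

Because the weights are arbitrary likelihoods, nothing here assumes Gaussian state errors or a linear update, which is the PF's advantage over the EnKF noted above.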
Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2012-01-01
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
Simulation of crop yield variability by improved root-soil-interaction modelling
NASA Astrophysics Data System (ADS)
Duan, X.; Gayler, S.; Priesack, E.
2009-04-01
Understanding the processes and factors that govern the within-field variability in crop yield has gained great importance due to applications in precision agriculture. Crop response to environment at field scale is a complex dynamic process involving the interactions of soil characteristics, weather conditions and crop management. The numerous static factors combined with temporal variations make it very difficult to identify and manage the variability pattern. Therefore, crop simulation models are considered to be useful tools in analyzing separately the effects of change in soil or weather conditions on the spatial variability, in order to identify the cause of yield variability and to quantify the spatial and temporal variation. However, tests showed that usual crop models such as CERES-Wheat and CERES-Maize were not able to quantify the observed within-field yield variability, while their performance on crop growth simulation under more homogeneous and mainly non-limiting conditions was sufficient to simulate average yields at the field scale. On a study site in South Germany, within-field variability in crop growth has been documented for years. After detailed analysis and classification of the soil patterns, two site-specific factors, the plant-available water and the O2 deficiency, were considered as the main causes of the crop growth variability in this field. Based on our measurement of root distribution in the soil profile, we hypothesize that in our case the insufficiency of the applied crop models to simulate the yield variability can be due to the oversimplification of the involved root models, which fail to be sensitive to different soil conditions. In this study, the root growth model described by Jones et al. (1991) was adapted by using data of root distributions in the field and linking the adapted root model to the CERES crop model.
The ability of the new root model to increase the sensitivity of the CERES crop models to different environmental conditions was then evaluated by comparing the simulation results with measured data and by scenario calculations.
Zigler, S.J.; Newton, T.J.; Steuer, J.J.; Bartsch, M.R.; Sauer, J.S.
2008-01-01
Interest in understanding physical and hydraulic factors that might drive distribution and abundance of freshwater mussels has been increasing due to their decline throughout North America. We assessed whether the spatial distribution of unionid mussels could be predicted from physical and hydraulic variables in a reach of the Upper Mississippi River. Classification and regression tree (CART) models were constructed using mussel data compiled from various sources and explanatory variables derived from GIS coverages. Prediction success of CART models for presence-absence of mussels ranged from 71 to 76% across three gears (brail, sled-dredge, and dive-quadrat), and the models explained 51% of the deviance in abundance. Models were largely driven by shear stress and substrate stability variables, but interactions with simple physical variables, especially slope, were also important. Geospatial models, which were based on tree model results, predicted few mussels in poorly connected backwater areas (e.g., floodplain lakes) and the navigation channel, whereas main channel border areas with high geomorphic complexity (e.g., river bends, islands, side channel entrances) and small side channels were typically favorable to mussels. Moreover, bootstrap aggregation of discharge-specific regression tree models of dive-quadrat data indicated that variables measured at low discharge were about 25% more predictive (PMSE = 14.8) than variables measured at median discharge (PMSE = 20.4), with high discharge (PMSE = 17.1) variables intermediate. This result suggests that episodic events such as droughts and floods were important in structuring mussel distributions. Although the substantial mussel and ancillary data in our study reach is unusual, our approach to develop exploratory statistical and geospatial models should be useful even when data are more limited. © 2007 Springer Science+Business Media B.V.
Reliability and validity of electrothermometers and associated thermocouples.
Jutte, Lisa S; Knight, Kenneth L; Long, Blaine C
2008-02-01
To examine thermocouple model uncertainty (reliability + validity). First, a 3x3 repeated-measures design with independent variables electrothermometer and thermocouple model. Second, a 1x3 repeated-measures design with independent variable subprobe. Three electrothermometers, 3 thermocouple models, a multi-sensor probe and a mercury thermometer measured a stable water bath. Temperature and absolute temperature differences between thermocouples and a mercury thermometer were recorded. Thermocouple uncertainty was greater than manufacturers' claims. For all thermocouple models, validity and reliability were better in the Iso-Themex than the Datalogger, but there were no practical differences between models within an electrothermometer. Validity of multi-sensor probes and thermocouples within a probe were not different but were greater than manufacturers' claims. Reliability of multiprobes and thermocouples within a probe were within manufacturers' claims. Thermocouple models vary in reliability and validity. Scientists should test and report the uncertainty of their equipment rather than depending on manufacturers' claims.
Lee, Jimin; Hustad, Katherine C.; Weismer, Gary
2014-01-01
Purpose Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystem approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Method Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (SMI), and nine judged to be free of dysarthria (NSMI). Data from children with CP were compared to data from age-matched typically developing children (TD). Results Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and TD groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (Adjusted R-squared = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R-squared analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Conclusions Children in the SMI group have articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems. PMID:24824584
Lee, Jimin; Hustad, Katherine C; Weismer, Gary
2014-10-01
Speech acoustic characteristics of children with cerebral palsy (CP) were examined with a multiple speech subsystems approach; speech intelligibility was evaluated using a prediction model in which acoustic measures were selected to represent three speech subsystems. Nine acoustic variables reflecting different subsystems, and speech intelligibility, were measured in 22 children with CP. These children included 13 with a clinical diagnosis of dysarthria (speech motor impairment [SMI] group) and 9 judged to be free of dysarthria (no SMI [NSMI] group). Data from children with CP were compared to data from age-matched typically developing children. Multiple acoustic variables reflecting the articulatory subsystem were different in the SMI group, compared to the NSMI and typically developing groups. A significant speech intelligibility prediction model was obtained with all variables entered into the model (adjusted R2 = .801). The articulatory subsystem showed the most substantial independent contribution (58%) to speech intelligibility. Incremental R2 analyses revealed that any single variable explained less than 9% of speech intelligibility variability. Children in the SMI group had articulatory subsystem problems as indexed by acoustic measures. As in the adult literature, the articulatory subsystem makes the primary contribution to speech intelligibility variance in dysarthria, with minimal or no contribution from other systems.
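The incremental R-squared analysis used in this study, comparing a full regression model against the model with one subsystem's variables withheld, can be sketched in numpy. The two predictors, their coefficients, and the sample size below are invented stand-ins for the nine acoustic variables, chosen so the "articulatory" variable dominates as it does in the study.

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R-squared of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1 - resid.var() / y.var()
    n, p = len(y), X.shape[1]
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Toy data: intelligibility driven mostly by an "articulatory" variable.
rng = np.random.default_rng(2)
artic = rng.normal(size=120)                    # e.g. a vowel-space measure
voice = rng.normal(size=120)                    # e.g. a phonatory measure
intel = 0.9 * artic + 0.2 * voice + rng.normal(scale=0.4, size=120)

full = adjusted_r2(np.column_stack([artic, voice]), intel)
no_artic = adjusted_r2(voice[:, None], intel)
print(f"incremental adj. R2 of articulatory variable: {full - no_artic:.2f}")
```

The independent contribution of a subsystem is the drop in (adjusted) R-squared when its variables are removed from the full model; in the study that drop was largest, 58%, for the articulatory subsystem.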
Hwang, Samuel Suk-Hyun; Ahn, Yong Min; Kim, Yong Sik
2015-12-30
Based on the neuropsychological deficit model of insight in schizophrenia, we constructed exploratory prediction models for insight, designating neurocognitive measures as the intermediary variables between psychopathology and insight in patients with schizophrenia. The models included the positive, negative, and autistic preoccupation symptoms as primary predictors, and activation symptoms as an intermediary variable for insight. Fifty-six Korean patients, in the acute stage of schizophrenia, completed the Positive and Negative Syndrome Scale, as well as a comprehensive neurocognitive battery of tests at the baseline, 8-week, and 1-year follow-ups. Among the neurocognitive measures, the Korean Wechsler Adult Intelligence Scale (K-WAIS) picture arrangement, Controlled Oral Word Association Test (COWAT) perseverative response, and the Continuous Performance Test (CPT) standard error of reaction time showed significant correlations with the symptoms and the insight. When these measures were fitted into the model as intermediaries between the symptoms and the insight, only the perseverative response was found to have a partial mediating effect, both cross-sectionally and in the 8-week longitudinal change. Overall, the relationship between insight and neurocognitive functioning measures was found to be selective and weak. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporating both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity of, applying mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
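Why pooled Pearson correlations mislead with repeated measures can be shown with a small simulation: shared between-subject baselines inflate the pooled correlation, while the within-subject correlation (computed after per-subject demeaning, which approximates the random-intercept adjustment an LME makes) stays near zero. A sketch under those assumptions, with synthetic data rather than the studies' measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_rep = 30, 8                       # hypothetical: 30 listeners, 8 conditions
subj = np.repeat(np.arange(n_subj), n_rep)

# Between-subject baselines are shared; within-subject noise is independent.
base = rng.normal(size=n_subj)
x = base[subj] + rng.normal(scale=0.3, size=n_subj * n_rep)
y = base[subj] + rng.normal(scale=0.3, size=n_subj * n_rep)

pooled_r = np.corrcoef(x, y)[0, 1]          # treats repeats as independent: inflated

def demean(v):
    """Remove each subject's mean before correlating."""
    out = v.copy()
    for s in range(n_subj):
        out[subj == s] -= v[subj == s].mean()
    return out

within_r = np.corrcoef(demean(x), demean(y))[0, 1]   # near zero here
```

A full LME fit would additionally estimate variance components and allow multiple fixed effects; this sketch isolates only the baseline-confounding issue.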
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.
2017-01-01
Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is also developed. The proposed estimators are nonparametric: they are consistent without imposing a covariate or error distribution and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2018-03-01
The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting (WRF) model for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to examine different conditions. Results show that the standard deviation of the TDM-measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations, indicating that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder
2008-06-01
Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied for protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve an improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, thereby further extending the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and sequence preferences involved in protein structures make this fuzzy-architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.
NASA Astrophysics Data System (ADS)
Häusler, K.; Hagan, M. E.; Baumgaertner, A. J. G.; Maute, A.; Lu, G.; Doornbos, E.; Bruinsma, S.; Forbes, J. M.; Gasperini, F.
2014-08-01
We report on a new source of tidal variability in the National Center for Atmospheric Research thermosphere-ionosphere-mesosphere-electrodynamics general circulation model (TIME-GCM). Lower boundary forcing of the TIME-GCM for a simulation of November-December 2009 based on 3-hourly Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis data includes day-to-day variations in both diurnal and semidiurnal tides of tropospheric origin. Comparison with TIME-GCM results from a heretofore standard simulation that includes climatological tropospheric tides from the global-scale wave model reveals evidence of the impacts of MERRA forcing throughout the model domain, including measurable tidal variability in the TIME-GCM upper thermosphere. Additional comparisons with measurements made by the Gravity field and steady-state Ocean Circulation Explorer satellite show improved TIME-GCM capability to capture day-to-day variations in thermospheric density for the November-December 2009 period with the new MERRA lower boundary forcing.
NASA Astrophysics Data System (ADS)
Wang, Jun; Wang, Yang; Zeng, Hui
2016-01-01
A key issue to address in synthesizing spatial data with variable support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems, based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions on a target support and the prediction uncertainties, based on one or more measurements, while honoring those measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can pose computational challenges due to large data sizes. We developed a local-window geostatistical inverse modeling approach to accommodate spatial nonstationarity and alleviate the computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated, aggregated to multiple supports, and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. The suggested local-window geostatistical inverse modeling approach offers a practical way to solve the well-known change-of-support and variable-support data fusion problems in spatial analysis and modeling.
Model averaging and muddled multimodel inferences.
Cade, Brian S
2015-09-01
Three flawed practices associated with model averaging coefficients for predictor variables in regression models commonly occur when making multimodel inferences in analyses of ecological data. Model-averaged regression coefficients based on Akaike information criterion (AIC) weights have been recommended for addressing model uncertainty, but they are not valid, interpretable estimates of partial effects for individual predictors when there is multicollinearity among the predictor variables. Multicollinearity implies that the scaling of units in the denominators of the regression coefficients may change across models such that neither the parameters nor their estimates have common scales, and therefore averaging them makes no sense. The associated sums of AIC model weights recommended to assess the relative importance of individual predictors are really a measure of the relative importance of models, with little information about contributions by individual predictors compared to other measures of relative importance based on effect size or variance reduction. Sometimes the model-averaged regression coefficients for predictor variables are incorrectly used to make model-averaged predictions of the response variable when the models are not linear in the parameters. I demonstrate the issues with the first two practices using the college grade point average example extensively analyzed by Burnham and Anderson. I show how partial standard deviations of the predictor variables can be used to detect changing scales of their estimates with multicollinearity. Standardizing estimates based on partial standard deviations for their variables can be used to make the scaling of the estimates commensurate across models, a necessary but not sufficient condition for model averaging of the estimates to be sensible. A unimodal distribution of estimates and valid interpretation of individual parameters are additional requisite conditions.
The standardized estimates or equivalently the t statistics on unstandardized estimates also can be used to provide more informative measures of relative importance than sums of AIC weights. Finally, I illustrate how seriously compromised statistical interpretations and predictions can be for all three of these flawed practices by critiquing their use in a recent species distribution modeling technique developed for predicting Greater Sage-Grouse (Centrocercus urophasianus) distribution in Colorado, USA. These model averaging issues are common in other ecological literature and ought to be discontinued if we are to make effective scientific contributions to ecological knowledge and conservation of natural resources.
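For reference, the Akaike weights whose sums the abstract criticizes as importance measures are computed from AIC differences as w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), with Δ_i = AIC_i - min AIC. A minimal sketch with hypothetical AIC values:

```python
import numpy as np

def aic_weights(aic):
    """Akaike weights: w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()          # differences from the best (lowest) AIC
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values for three candidate models.
w = aic_weights([100.0, 102.0, 110.0])
```

Note the weights compare whole models; as the abstract argues, summing them over models containing a given predictor does not measure that predictor's effect size.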
NASA Astrophysics Data System (ADS)
Bastola, S.; Dialynas, Y. G.; Arnone, E.; Bras, R. L.
2014-12-01
The spatial variability of soil, vegetation, topography, and precipitation controls hydrological processes, resulting in high spatio-temporal variability in most hydrological variables, such as soil moisture. The limitations of existing measurement systems in characterizing this spatial variability, together with its importance in various applications, create a need to reconcile spatially distributed soil moisture models with corresponding measurements. Fully distributed ecohydrological models simulate soil moisture at high resolution, which is relevant for a range of environmental studies, e.g., flood forecasting. They can also be used to evaluate the value of spaceborne soil moisture data by assimilating those data into hydrological models. In this study, fine-resolution soil moisture simulated by a physically based distributed hydrological model, tRIBS-VEGGIE, is compared with soil moisture data collected during a field campaign in the Turkey River basin, Iowa. The soil moisture series at the 2- and 4-inch depths exhibited a more rapid response to rainfall than those at the deeper 8- and 20-inch depths. The spatial variability across two distinct land surfaces of the Turkey River basin reflects the control of vegetation, topography, and soil texture. The comparison of observed and simulated soil moisture at various depths showed that the model was able to capture soil moisture dynamics at a number of gauging stations. Discrepancies were large at some stations, which are characterized by rugged terrain and are represented in the model through large computational units.
Sonntag, Darrell B; Gao, H Oliver; Holmén, Britt A
2008-08-01
A linear mixed model was developed to quantify the variability of particle number emissions from transit buses tested in real-world driving conditions. Two conventional diesel buses and two hybrid diesel-electric buses were tested throughout 2004 under different aftertreatments, fuels, drivers, and bus routes. The mixed model controlled for the confounding influence of factors inherent to on-board testing. Statistical tests showed that particle number emissions varied significantly according to the aftertreatment, bus route, driver, bus type, and daily temperature, with only minor variability attributable to differences between fuel types. The daily setup and operation of the sampling equipment (electrical low pressure impactor) and mini-dilution system contributed 30-84% of the total random variability of particle measurements among tests with diesel oxidation catalysts. By controlling for the sampling-day variability, the model better defined the differences in particle emissions among bus routes. In contrast, the low particle number emissions measured with diesel particle filters (reduced by over 99%) did not vary according to operating conditions or bus type but did vary substantially with ambient temperature.
Intractable Ménière's disease. Modelling of the treatment by means of statistical analysis.
Sanchez-Ferrandiz, Noelia; Fernandez-Gonzalez, Secundino; Guillen-Grima, Francisco; Perez-Fernandez, Nicolas
2010-08-01
To evaluate the value of different variables from the clinical history, auditory and vestibular tests, and handicap measurements in defining intractable or disabling Ménière's disease. This is a prospective study of 212 patients, of whom 155 were treated with intratympanic gentamicin and considered to be suffering from medically intractable Ménière's disease. Age and sex adjustments were performed for the 11 variables selected. Discriminant analysis was performed either using the aforementioned variables or following the stepwise method. Several variables needed to be sex- and/or age-adjusted, and the adjusted values were included in the discriminant function. Two different mathematical formulas were obtained and four models were analyzed. With the model selected, diagnostic accuracy is 77.7%, sensitivity is 94.9% and specificity is 52.8%. After discriminant analysis we found that the most informative variables were the number of vertigo spells, the speech discrimination score, the time constant of the VOR and a measure of handicap, the "dizziness index". Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision codes, plus age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using a multivariate imputation by chained equations (MICE) algorithm. We compared the models with each other and with baseline clinical scoring systems using the mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) than common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, with 6 of these variables being derived from echocardiography.
Tricuspid regurgitation velocity was more predictive of survival than LVEF. In a subset of studies with complete data for the top 10 variables, multivariate imputation by chained equations yielded slightly reduced predictive accuracies (difference in AUC of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
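The AUC used above to compare models is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A self-contained sketch with toy scores (not the study's data):

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability a positive case outranks a negative case."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Count pairwise wins; ties count as half a win.
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

a = auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])   # perfectly separated scores
```

In a cross-validated comparison such as the study's, this statistic would be computed on each held-out fold and averaged.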
2011-01-01
Background Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. Methods We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Results Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. 
The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. Conclusions We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting. PMID:22023778
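Steps 4 and 6 of the list above, defining a window duration/resolution and computing time-series features as latent variables, can be sketched with rolling windows over a vital-sign series. The window mean and linear trend below are plausible deterioration features; the heart-rate series and parameters are hypothetical, not from the study:

```python
import numpy as np

def window_features(signal, width, step):
    """Slide a window over a 1-D series; emit per-window mean and linear trend."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        t = np.arange(width)
        slope = np.polyfit(t, w, 1)[0]   # deterioration shows up as a trend
        feats.append((w.mean(), slope))
    return np.array(feats)

# Hypothetical heart-rate series: stable for 60 samples, then rising.
hr = np.concatenate([np.full(60, 120.0), 120.0 + 0.5 * np.arange(60)])
F = window_features(hr, width=30, step=10)
```

Each row of F is a feature vector for one time window; stacking such rows across candidate variables yields the kind of data set the ten-step process describes.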
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. 
The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
Measuring the efficiencies of visiting nurse service agencies using data envelopment analysis.
Kuwahara, Yuki; Nagata, Satoko; Taguchi, Atsuko; Naruse, Takashi; Kawaguchi, Hiroyuki; Murashima, Sachiyo
2013-09-01
This study develops a measure of the efficiency of visiting nurse (VN) agencies in Japan, examines issues related to the measurement of efficiency, and identifies characteristics that influence efficiency. We employed a data envelopment analysis to measure the efficiency of 108 VN agencies, using the numbers of 5 types of staff as the input variables and the numbers of 3 types of visits as the output variables. The median efficiency scores of the VN agencies were found to be 0.80 and 1.00 according to the constant returns to scale (CRS) and variable returns to scale (VRS) models, respectively, and the median scale efficiency score was 0.95. This study supports using both the CRS and VRS models to measure the scale efficiency of VN service agencies. We also found that relatively efficient VN agencies filled at least 30% of staff positions with experienced workers, and so concluded that this characteristic has a direct influence on the length of visits.
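The CRS efficiency score referred to here is the input-oriented CCR model of data envelopment analysis, solvable as one small linear program per unit. A sketch with a hypothetical two-agency example (one staff input, one visit output), not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR (constant returns to scale) DEA efficiency of unit k.

    X: (n_units, n_inputs) inputs; Y: (n_units, n_outputs) outputs.
    Solves: min theta  s.t.  X.T @ lam <= theta * X[k],  Y.T @ lam >= Y[k],  lam >= 0.
    """
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                            # minimise theta
    A_in = np.column_stack([-X[k], X.T])                   # X.T lam - theta X[k] <= 0
    A_out = np.column_stack([np.zeros(Y.shape[1]), -Y.T])  # -Y.T lam <= -Y[k]
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Hypothetical: agency 1 uses twice the staff for the same visits as agency 0.
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
effs = [ccr_efficiency(X, Y, k) for k in range(2)]
```

Adding the convexity constraint sum(lam) = 1 (via A_eq) gives the VRS score, and the CRS/VRS ratio is the scale efficiency reported in the abstract.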
NASA Technical Reports Server (NTRS)
Cook, A. B.; Fuller, C. R.; O'Brien, W. F.; Cabell, R. H.
1992-01-01
A method of indirectly monitoring component loads through common flight variables is proposed which requires an accurate model of the underlying nonlinear relationships. An artificial neural network (ANN) model learns relationships through exposure to a database of flight variable records and corresponding load histories from an instrumented military helicopter undergoing standard maneuvers. The ANN model, utilizing eight standard flight variables as inputs, is trained to predict normalized time-varying mean and oscillatory loads on two critical components over a range of seven maneuvers. Both interpolative and extrapolative capabilities are demonstrated with agreement between predicted and measured loads on the order of 90 percent to 95 percent. This work justifies pursuing the ANN method of predicting loads from flight variables.
NASA Astrophysics Data System (ADS)
Ray, E. A.; Daniel, J. S.; Montzka, S. A.; Portmann, R. W.; Yu, P.; Rosenlof, K. H.; Moore, F. L.
2017-12-01
We use surface measurements of a number of long-lived trace gases, including CFC-11, CFC-12 and N2O, and a 3-box model to estimate the interannual variability of bulk stratospheric transport characteristics. Coherent features among the different surface measurements suggest that there have been periods over the last two decades with significant variability in the amount of stratospheric loss transported downward to the troposphere both globally and between the NH and SH. This is especially apparent around the year 2000 and in the recent period since 2013 when surface measurements suggest an overall slowdown of the transport of stratospheric air to the troposphere as well as a shift towards a relatively stronger stratospheric circulation in the SH compared to the NH. We compare these results to stratospheric satellite measurements, residual circulation estimates and global model simulations to check for consistency. The implications of not accounting for interannual variability in stratospheric loss transported to the surface in emission estimates of long-lived trace gases can be significant, including for those gases monitored by the Montreal Protocol and/or of climatic importance.
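The specifics of the 3-box model are not given in the abstract; the sketch below is a generic, hypothetical 3-box tracer (NH troposphere, SH troposphere, stratosphere) with assumed equal box masses, assumed exchange timescales, and loss only in the stratospheric box, integrated with forward Euler, to illustrate how interhemispheric and stratosphere-troposphere exchange shape surface abundances:

```python
import numpy as np

# Hypothetical exchange and loss timescales (years); not the paper's values.
tau_ns, tau_st, tau_loss = 1.0, 2.0, 50.0
dt, years = 0.01, 20.0

def step(nh, sh, st):
    """One Euler step of the 3-box tracer (equal box masses assumed)."""
    d_nh = (sh - nh) / tau_ns + (st - nh) / tau_st
    d_sh = (nh - sh) / tau_ns + (st - sh) / tau_st
    d_st = ((nh - st) + (sh - st)) / tau_st - st / tau_loss  # loss only aloft
    return nh + dt * d_nh, sh + dt * d_sh, st + dt * d_st

nh, sh, st = 1.0, 1.0, 1.0        # normalized initial abundances
for _ in range(int(years / dt)):
    nh, sh, st = step(nh, sh, st)
```

In such a model, slowing the stratosphere-troposphere exchange (larger tau_st) reduces the stratospheric loss signal reaching the surface boxes, which is the mechanism the surface measurements are used to infer.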
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
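Fitting a mixture of Gaussians to a one-dimensional series such as RR intervals can be done with a short expectation-maximization loop. The sketch below uses synthetic RR-like data and a hand-rolled 1-D EM, not the paper's method or data:

```python
import numpy as np

def fit_gmm_1d(x, k=3, iters=200, seed=0):
    """EM for a 1-D Gaussian mixture: returns weights, means, stds."""
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    mu = rng.choice(x, k, replace=False)    # init means at random data points
    sd = np.full(k, x.std())
    for _ in range(iters):
        # E-step: responsibilities of each component for each point
        d = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = w * d + 1e-300                  # guard against all-zero rows
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, stds
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w, mu, sd

# Synthetic RR-interval-like data (seconds), three hypothetical regimes.
rng = np.random.default_rng(2)
rr = np.concatenate([rng.normal(0.8, 0.02, 500),
                     rng.normal(0.9, 0.03, 300),
                     rng.normal(1.1, 0.05, 200)])
w, mu, sd = fit_gmm_1d(rr, k=3)
```

A useful sanity check is that at any EM fixed point the mixture mean equals the sample mean, so (w * mu).sum() matches rr.mean().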
NASA Astrophysics Data System (ADS)
Ransom, K.; Nolan, B. T.; Faunt, C. C.; Bell, A.; Gronberg, J.; Traum, J.; Wheeler, D. C.; Rosecrans, C.; Belitz, K.; Eberts, S.; Harter, T.
2016-12-01
A hybrid, non-linear, machine learning statistical model was developed within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface in the Central Valley, California. A database of 213 predictor variables representing well characteristics, historical and current field- and county-scale nitrogen mass balance, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6,000 private supply and public supply wells previously measured for nitrate and located throughout the study area. The machine learning method, gradient boosting machine (GBM), was used to screen predictor variables and rank them in order of importance in relation to the groundwater nitrate measurements. The top five most important predictor variables included oxidation/reduction characteristics, historical field-scale nitrogen mass balance, climate, and depth to 60-year-old water. Twenty-two variables were selected for the final model, and final model errors for log-transformed hold-out data were an R-squared of 0.45 and a root mean square error (RMSE) of 1.124. Modeled mean groundwater age was tested separately for error improvement; when included, it decreased model RMSE by 0.5% compared to the same model without age and by 0.20% compared to the model with all 213 variables. 1D and 2D partial plots were examined to determine how variables behave individually and interact in the model. Some variables behaved as expected: log nitrate decreased with increasing probability of anoxic conditions and depth to 60-year-old water, generally decreased with increasing natural land use surrounding wells and increasing mean groundwater age, and generally increased with increased minimum depth to a high water table and with increased base flow index value.
Other variables exhibited much more erratic or noisy behavior in the model, making them more difficult to interpret but highlighting the usefulness of the non-linear machine learning method. 2D interaction plots show that the probability of anoxic groundwater conditions largely controls estimated nitrate concentrations compared with the other predictors.
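The variable-screening step described above, in which a gradient boosting machine ranks predictors by importance, can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data; the feature count, coefficients, and noise level are assumptions, not the study's actual database.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# six stand-in predictors (e.g. redox, N mass balance, climate, ...)
X = rng.normal(size=(n, 6))
# log-nitrate depends strongly on the first two predictors, weakly on the third
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

gbm = GradientBoostingRegressor(n_estimators=200, random_state=0).fit(X, y)
# rank predictors by impurity-based importance, most important first
ranking = np.argsort(gbm.feature_importances_)[::-1]
print(ranking[:2])  # the two strongly informative predictors rank highest
```

In practice the top-ranked subset (22 variables in the study) would then be refit as the final model.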
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
NASA Astrophysics Data System (ADS)
Delpierre, N.; Dufrêne, E.
2009-04-01
With several sites measuring mass and energy turbulent fluxes for more than ten years, the CarboEurope database is a valuable resource for addressing the determinants of the interannual variability of carbon (C) and water balances in forest ecosystems. Apart from major climate-driven anomalies during the anomalous 2003 summer and 2007 spring, little is known about the factors driving interannual variability (IAV) of the C balance in forest ecosystems. We used the CASTANEA process-based model to simulate the C and water fluxes and balances of three European evergreen forests for the 2000-2007 period (FRPue Quercus ilex, 44°N; DETha Picea abies, 51°N; FIHyy Pinus sylvestris, 62°N). The model reproduced the day-to-day variability of measured fluxes fairly well, accounting for 70-81%, 77-91% and 59-90% of the daily variance of measured NEP, GPP and TER, respectively. However, the model struggled to represent the IAV of fluxes integrated at an annual time scale. It reproduced ca. 80% of the interannual variance of measured GPP, but no significant relationship could be established between annual measured and modelled NEP or TER. Accordingly, CASTANEA appears to be a suitable tool for disentangling the influence of climate and biological processes on GPP at multiple time scales. We show that the relative influences of climate and biological processes on modelled GPP vary from year to year in European evergreen forests. Water-stress-related and phenological processes (i.e. release of the winter thermal constraint on photosynthesis in evergreens) appear to be the primary drivers of the exceptional 2003 and 2007 years, respectively, but the relative influence of other climatic factors varies widely for less remarkable years at all sites. We discuss shortcomings of the method related to compensating errors in the simulated fluxes, and assess the causes of the model's poor ability to represent the IAV of the annual sums of NEP and TER.
NASA Astrophysics Data System (ADS)
Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan
2012-11-01
The feasibility of rapid determination of process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated using the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal model were as follows: RMSECV = 0.0776, Rc = 0.9777, RMSEP = 0.0963, and Rp = 0.9686 for the pH model; RMSECV = 1.3544% w/w, Rc = 0.8871, RMSEP = 1.4946% w/w, and Rp = 0.8684 for the moisture content model. Finally, compared with classic PLS and iPLS models, the siPLS model showed superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry.
Bollen, Kenneth A; Noble, Mark D; Adair, Linda S
2013-07-30
The fetal origins hypothesis emphasizes the life-long health impacts of prenatal conditions. Birth weight, birth length, and gestational age are indicators of the fetal environment. However, these variables often have missing data and are subject to random and systematic errors caused by delays in measurement, differences in measurement instruments, and human error. With data from the Cebu (Philippines) Longitudinal Health and Nutrition Survey, we use structural equation models to explore random and systematic errors in these birth outcome measures, to analyze how maternal characteristics relate to birth outcomes, and to take account of missing data. We assess whether birth weight, birth length, and gestational age are influenced by a single latent variable that we call favorable fetal growth conditions (FFGC) and if so, which variable is most closely related to FFGC. We find that a model with FFGC as a latent variable fits as well as a less parsimonious model that has birth weight, birth length, and gestational age as distinct individual variables. We also demonstrate that birth weight is more reliably measured than is gestational age. FFGC was significantly influenced by taller maternal stature, better nutritional stores indexed by maternal arm fat and muscle area during pregnancy, higher birth order, avoidance of smoking, and maternal age 20-35 years. Effects of maternal characteristics on newborn weight, length, and gestational age were largely indirect, operating through FFGC. Copyright © 2013 John Wiley & Sons, Ltd.
Bayesian Analysis of Structural Equation Models with Nonlinear Covariates and Latent Variables
ERIC Educational Resources Information Center
Song, Xin-Yuan; Lee, Sik-Yum
2006-01-01
In this article, we formulate a nonlinear structural equation model (SEM) that can accommodate covariates in the measurement equation and nonlinear terms of covariates and exogenous latent variables in the structural equation. The covariates can come from continuous or discrete distributions. A Bayesian approach is developed to analyze the…
Aptitude, Achievement and Competence in Medicine: A Latent Variable Path Model
ERIC Educational Resources Information Center
Collin, V. Terri; Violato, Claudio; Hecker, Kent
2009-01-01
To develop and test a latent variable path model of general achievement, aptitude for medicine and competence in medicine employing data from the Medical College Admission Test (MCAT), pre-medical undergraduate grade point average (UGPA) and demographic characteristics for competence in pre-clinical and measures of competence (United States…
Modeling Time-Dependent Association in Longitudinal Data: A Lag as Moderator Approach
ERIC Educational Resources Information Center
Selig, James P.; Preacher, Kristopher J.; Little, Todd D.
2012-01-01
We describe a straightforward, yet novel, approach to examine time-dependent association between variables. The approach relies on a measurement-lag research design in conjunction with statistical interaction models. We base arguments in favor of this approach on the potential for better understanding the associations between variables by…
Adolescent Substance Use, Sleep, and Academic Achievement: Evidence of Harm Due to Caffeine
ERIC Educational Resources Information Center
James, Jack E.; Kristjansson, Alfgeir Logi; Sigfusdottir, Inga Dora
2011-01-01
Using academic achievement as the key outcome variable, 7377 Icelandic adolescents were surveyed for cigarette smoking, alcohol use, daytime sleepiness, caffeine use, and potential confounders. Structural equation modeling (SEM) was used to examine direct and indirect effects of measured and latent variables in two models: the first with caffeine…
NASA Astrophysics Data System (ADS)
Toda, M.; Yokozawa, M.; Richardson, A. D.; Kohyama, T.
2011-12-01
The effects of wind disturbance on interannual variability in ecosystem CO2 exchange were assessed in two forests in northern Japan: a young, even-aged, monocultured deciduous forest and an uneven-aged mixed forest of evergreen and deciduous trees (some over 200 years old), using eddy covariance (EC) measurements during 2004-2008. The EC measurements indicated that the photosynthetic recovery of trees after a large typhoon in early September 2004 enhanced the annual carbon uptake of both forests, owing to changes in the physiological response of tree leaves during their growth stages. However, little has been resolved about which biotic and abiotic factors regulate interannual variability in heat, water and carbon exchange between the atmosphere and forests. In recent years, inverse modeling has been used as a powerful tool to estimate the biotic and abiotic parameters of a parsimonious, physiologically based model that affect heat, water and CO2 exchange between the atmosphere and a forest. We conducted a Bayesian inverse analysis of such a model using the EC measurements. The preliminary results showed that model-derived NEE values, with parameters optimized by Bayesian inversion, were consistent with hourly observations. In the presentation, we examine interannual variability in the biotic and abiotic parameters related to heat, water and carbon exchange between the atmosphere and the forests after typhoon disturbance.
Incorporating additional tree and environmental variables in a lodgepole pine stem profile model
John C. Byrne
1993-01-01
A new variable-form segmented stem profile model is developed for lodgepole pine (Pinus contorta) trees from the northern Rocky Mountains of the United States. I improved estimates of stem diameter by predicting two of the model coefficients with linear equations using a measure of tree form, defined as a ratio of dbh and total height. Additional improvements were...
Lunt, Mark
2015-07-01
In the first article in this series we explored the use of linear regression to predict an outcome variable from a number of predictive factors. It assumed that the predictive factors were measured on an interval scale. However, this article shows how categorical variables can also be included in a linear regression model, enabling predictions to be made separately for different groups and allowing the hypothesis that the outcome differs between groups to be tested. The use of interaction terms to measure whether the effect of a particular predictor variable differs between groups is also explained. An alternative approach to testing whether the effect of a given predictor differs between groups, which consists of measuring the effect in each group separately and seeing whether the statistical significance differs between the groups, is shown to be misleading. © The Author 2013. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
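The recommended approach, a dummy-coded group variable plus an interaction term whose coefficient estimates the between-group difference in slope, can be sketched with simulated data. All coefficients below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
group = rng.integers(0, 2, size=n)           # 0/1 dummy for two groups
x = rng.normal(size=n)                       # interval-scale predictor
# true model: slope 1.0 in group 0, slope 1.0 + 1.5 = 2.5 in group 1
y = 0.5 + 1.0 * x + 0.3 * group + 1.5 * group * x + rng.normal(scale=0.2, size=n)

# design matrix: intercept, predictor, group dummy, interaction
X = np.column_stack([np.ones(n), x, group, x * group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 1))  # interaction coefficient beta[3] ~ slope difference
```

A single fit with the interaction term tests the slope difference directly, rather than comparing per-group p-values, which the article shows to be misleading.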
ERIC Educational Resources Information Center
Bowden, Stephen C.; Saklofske, Donald H.; Weiss, Lawrence G.
2011-01-01
A measurement model describes both the numerical and theoretical relationship between observed scores and the corresponding latent variables or constructs. Testing a measurement model across groups is required to determine if the tests scores are tapping the same constructs so that the same meaning can be ascribed to the scores. Contemporary tests…
Applied Music Teaching Behavior as a Function of Selected Personality Variables.
ERIC Educational Resources Information Center
Schmidt, Charles P.
1989-01-01
Investigates the relationships among applied music teaching behaviors and personality variables as measured by the Myers-Briggs Type Indicator (MBTI). Suggests that personality variables may be important factors underlying four applied music teaching behaviors: approvals, rate of reinforcement, teacher model/performance, and pace. (LS)
Heddam, Salim
2016-09-01
This paper proposes a multilayer perceptron neural network (MLPNN) to predict phycocyanin (PC) pigment concentration using water quality variables as predictors. In the proposed model, four water quality variables, namely water temperature, dissolved oxygen, pH, and specific conductance, were selected as the inputs for the MLPNN model, and PC as the output. To demonstrate the capability and usefulness of the MLPNN model, a total of 15,849 data points measured at 15-min intervals were used for the development of the model. The data were collected at the lower Charles River buoy, and are available from the US Environmental Protection Agency (USEPA). For comparison purposes, a multiple linear regression (MLR) model, frequently used for predicting water quality variables in previous studies, was also built. The performances of the models were evaluated using a set of widely used statistical indices and compared against the measured data. The obtained results show that (i) all the proposed MLPNN models are more accurate than the MLR models and (ii) the results obtained are very promising and encouraging for the development of phycocyanin-predictive models.
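The comparison made in the abstract, a small MLP against a multiple linear regression on four inputs, can be sketched as follows. The synthetic nonlinear response, network size, and noise level are assumptions for the demo, not the Charles River data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
# four stand-ins for water temperature, dissolved oxygen, pH, conductance
X = rng.uniform(-1, 1, size=(n, 4))
# a nonlinear response that a linear model cannot capture fully
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2] + 0.05 * rng.normal(size=n)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(Xtr, ytr)
mlr = LinearRegression().fit(Xtr, ytr)

rmse = lambda m: np.sqrt(np.mean((yte - m.predict(Xte)) ** 2))
print(round(rmse(mlp), 3), round(rmse(mlr), 3))
```

On a genuinely nonlinear response the network's test RMSE should come out below the linear baseline, mirroring result (i) above.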
Model specification in oral health-related quality of life research.
Kieffer, Jacobien M; Verrips, Erik; Hoogstraten, Johan
2009-10-01
The aim of this study was to analyze conventional wisdom regarding the construction and analysis of oral health-related quality of life (OHRQoL) questionnaires and to outline statistical complications. Most methods used for developing and analyzing questionnaires, such as factor analysis and Cronbach's alpha, presume psychological constructs to be latent, inferring a reflective measurement model with the underlying assumption of local independence. Local independence implies that the latent variable explains why the variables observed are related. Many OHRQoL questionnaires are analyzed as if they were based on a reflective measurement model; local independence is thus assumed. This assumption requires these questionnaires to consist solely of items that reflect, instead of determine, OHRQoL. The tenability of this assumption is the main topic of the present study. It is argued that OHRQoL questionnaires are a mix of both a formative measurement model and a reflective measurement model, thus violating the assumption of local independence. The implications are discussed.
2013-01-01
Background Our previous model of the non-isometric muscle fatigue that occurs during repetitive functional electrical stimulation included models of force, motion, and fatigue and accounted for applied load but not stimulation pulse duration. Our objectives were to: 1) further develop, 2) validate, and 3) present outcome measures for a non-isometric fatigue model that can predict the effect of a range of pulse durations on muscle fatigue. Methods A computer-controlled stimulator sent electrical pulses to electrodes on the thighs of 25 able-bodied human subjects. Isometric and non-isometric non-fatiguing and fatiguing knee torques and/or angles were measured. Pulse duration (170–600 μs) was the independent variable. Measurements were divided into parameter identification and model validation subsets. Results The fatigue model was simplified by removing two of three non-isometric parameters. The third remained a function of other model parameters. Between 66% and 77% of the variability in the angle measurements was explained by the new model. Conclusion Muscle fatigue in response to different stimulation pulse durations can be predicted during non-isometric repetitive contractions. PMID:23374142
Beyond a bigger brain: Multivariable structural brain imaging and intelligence
Ritchie, Stuart J.; Booth, Tom; Valdés Hernández, Maria del C.; Corley, Janie; Maniega, Susana Muñoz; Gow, Alan J.; Royle, Natalie A.; Pattie, Alison; Karama, Sherif; Starr, John M.; Bastin, Mark E.; Wardlaw, Joanna M.; Deary, Ian J.
2015-01-01
People with larger brains tend to score higher on tests of general intelligence (g). It is unclear, however, how much variance in intelligence other brain measurements would account for if included together with brain volume in a multivariable model. We examined a large sample of individuals in their seventies (n = 672) who were administered a comprehensive cognitive test battery. Using structural equation modelling, we related six common magnetic resonance imaging-derived brain variables that represent normal and abnormal features (brain volume, cortical thickness, white matter structure, white matter hyperintensity load, iron deposits, and microbleeds) to g and to fluid intelligence. As expected, brain volume accounted for the largest portion of variance (~12%, depending on modelling choices). Adding the additional variables, especially cortical thickness (+~5%) and white matter hyperintensity load (+~2%), increased the predictive value of the model. Depending on modelling choices, all neuroimaging variables together accounted for 18-21% of the variance in intelligence. These results reveal which structural brain imaging measures relate to g over and above the largest contributor, total brain volume. They raise questions regarding which other neuroimaging measures might account for even more of the variance in intelligence. PMID:26240470
Ramezankhani, Azra; Pournik, Omid; Shahrabi, Jamal; Khalili, Davood; Azizi, Fereidoun; Hadaegh, Farzad
2014-09-01
The aim of this study was to create a prediction model using data mining approach to identify low risk individuals for incidence of type 2 diabetes, using the Tehran Lipid and Glucose Study (TLGS) database. For a 6647 population without diabetes, aged ≥20 years, followed for 12 years, a prediction model was developed using classification by the decision tree technique. Seven hundred and twenty-nine (11%) diabetes cases occurred during the follow-up. Predictor variables were selected from demographic characteristics, smoking status, medical and drug history and laboratory measures. We developed the predictive models by decision tree using 60 input variables and one output variable. The overall classification accuracy was 90.5%, with 31.1% sensitivity, 97.9% specificity; and for the subjects without diabetes, precision and f-measure were 92% and 0.95, respectively. The identified variables included fasting plasma glucose, body mass index, triglycerides, mean arterial blood pressure, family history of diabetes, educational level and job status. In conclusion, decision tree analysis, using routine demographic, clinical, anthropometric and laboratory measurements, created a simple tool to predict individuals at low risk for type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
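The workflow described above, a decision tree classifier evaluated by accuracy, sensitivity, and specificity on an imbalanced outcome, can be sketched as below. The class balance (~11% positives) follows the abstract; the synthetic features, tree depth, and split are assumptions, not the TLGS database.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# synthetic cohort with roughly 11% incident cases, as in the study
X, y = make_classification(n_samples=3000, n_features=10,
                           weights=[0.89, 0.11], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                      stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xtr, ytr)

tn, fp, fn, tp = confusion_matrix(yte, tree.predict(Xte)).ravel()
accuracy = (tp + tn) / (tn + fp + fn + tp)
sensitivity = tp / (tp + fn)   # recall on the (rare) diabetes cases
specificity = tn / (tn + fp)   # correct classification of non-cases
print(round(accuracy, 2), round(sensitivity, 2), round(specificity, 2))
```

The pattern reported in the study (high specificity, modest sensitivity) is typical for trees trained on imbalanced outcomes, which is why the tool is framed as identifying low-risk individuals.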
Variability of pCO2 in surface waters and development of prediction model.
Chung, Sewoong; Park, Hyungseok; Yoo, Jisu
2018-05-01
Inland waters are substantial sources of atmospheric carbon, but relevant data are rare in Asian monsoon regions including Korea. Emissions of CO2 to the atmosphere depend largely on the partial pressure of CO2 (pCO2) in water; however, measured pCO2 data are scarce and calculated pCO2 can show large uncertainty. This study had three objectives: 1) to examine the spatial variability of pCO2 in diverse surface water systems in Korea; 2) to compare pCO2 calculated using pH-total alkalinity (Alk) and pH-dissolved inorganic carbon (DIC) with pCO2 measured by an in situ submersible nondispersive infrared detector; and 3) to characterize the major environmental variables determining the variation of pCO2 based on physical, chemical, and biological data collected concomitantly. Of 30 samples, 80% were found supersaturated in CO2 with respect to the overlying atmosphere. Calculated pCO2 using pH-Alk and pH-DIC showed weak prediction capability and large variations with respect to measured pCO2. Error analysis indicated that calculated pCO2 is highly sensitive to the accuracy of pH measurements, particularly at low pH. Stepwise multiple linear regression (MLR) and random forest (RF) techniques were implemented to develop the most parsimonious model based on 10 potential predictor variables (pH, Alk, DIC, Uw, Cond, Turb, COD, DOC, TOC, Chla) by optimizing model performance. The RF model showed better performance than the MLR model, and the most parsimonious RF model (pH, Turb, Uw, Chla) improved pCO2 prediction capability considerably compared with the simple calculation approach, reducing the RMSE from 527-544 to 105 μatm at the study sites. Copyright © 2017 Elsevier B.V. All rights reserved.
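The model comparison above, a parsimonious random forest on a few selected predictors against a linear regression on the full set, can be sketched as below. The synthetic response (with an exponential term and an interaction standing in for nonlinear pCO2 dependence) and the chosen predictor indices are assumptions for the demo.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n = 400
# ten stand-ins for pH, Alk, DIC, wind speed, turbidity, Chla, etc.
X = rng.normal(size=(n, 10))
# nonlinear dependence on the first four; the rest are noise predictors
y = np.exp(-X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.3 * X[:, 3] \
    + 0.1 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
mlr = LinearRegression()
rmse = lambda model, cols: np.sqrt(np.mean(
    (y - cross_val_predict(model, X[:, cols], y, cv=5)) ** 2))

# parsimonious RF (four predictors) vs MLR on all ten predictors
print(round(rmse(rf, [0, 1, 2, 3]), 2), round(rmse(mlr, list(range(10))), 2))
```

Cross-validated RMSE makes the comparison fair despite the two models using different predictor subsets, which mirrors how the study compared its stepwise MLR against the reduced RF model.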
Ferrier, Rachel; Hezard, Bernard; Lintz, Adrienne; Stahl, Valérie
2013-01-01
An individual-based modeling (IBM) approach was developed to describe the behavior of a few Listeria monocytogenes cells contaminating the surface of smear soft cheese. The IBM approach consisted of assessing the stochastic individual behaviors of cells on cheese surfaces given the characteristics of their surrounding microenvironments. We used a microelectrode for pH measurements and micro-osmolality to assess the water activity of cheese microsamples. These measurements revealed a high variability of microscale pH compared to macroscale pH. A model describing the increase in pH from approximately 5.0 to more than 7.0 during ripening was developed. The spatial variability of the cheese surface, characterized by pH increasing with radius and higher pH on crests than in hollows on the cheese rind, was also modeled. The microscale water activity ranged from approximately 0.96 to 0.98 and was stable during ripening. The spatial variability on cheese surfaces was low compared to between-cheese variability. Models describing the microscale variability of cheese characteristics were combined with the IBM approach to simulate the stochastic growth of L. monocytogenes on cheese, and these simulations were compared to bacterial counts obtained from irradiated cheeses artificially contaminated at different ripening stages. The variability of L. monocytogenes counts simulated with the IBM/microenvironmental approach was consistent with the observed variability. Contrasting situations corresponding to no growth or highly contaminated foods could be deduced from these models. Moreover, the IBM approach was more effective than the traditional population/macroenvironmental approach at describing the actual variability in bacterial behavior. PMID:23872572
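A highly simplified sketch of the individual-based idea follows: each founding cell draws its own microenvironment (pH, water activity), its growth rate depends on that draw, and repeating the simulation yields a distribution of final counts rather than a single value. The cardinal-type rate function and all parameter values are invented for illustration, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(9)

def growth_rate(ph, aw):
    # illustrative cardinal-type parameters, not fitted values from the study
    mu_opt, ph_min, ph_opt, aw_min = 0.3, 4.4, 7.0, 0.92
    ph_term = min(1.0, max(0.0, (ph - ph_min) / (ph_opt - ph_min)))
    aw_term = min(1.0, max(0.0, (aw - aw_min) / (1.0 - aw_min)))
    return mu_opt * ph_term * aw_term          # specific growth rate, per hour

def simulate(n_cells=5, hours=72, n_runs=200):
    log_counts = []
    for _ in range(n_runs):
        total = 0.0
        for _ in range(n_cells):
            # each founding cell sees its own surface microenvironment
            ph = rng.normal(5.8, 0.4)
            aw = rng.uniform(0.96, 0.98)
            total += np.exp(growth_rate(ph, aw) * hours)
        log_counts.append(np.log10(total))
    return np.array(log_counts)

res = simulate()
print(round(res.mean(), 1), round(res.std(), 2))
```

The spread across runs is the point: a population/macroenvironmental model would return the mean alone, whereas the IBM reproduces between-replicate variability in counts.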
Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G
2016-09-01
A methodology to predict PM10 concentrations in urban outdoor environments was developed based on generalized linear models (GLMs). The methodology builds on the relationship between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) of Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and meteorological variables as explanatory independent variables. A logarithmic link function was used with a Poisson probability distribution. Particular attention was given to cases with air temperatures below and above 25°C. The best performance of modelled results against the measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all air temperatures and the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from air quality monitoring stations or other acquisition means.
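A Poisson GLM with a log link of the kind described above can be sketched with scikit-learn's `PoissonRegressor` (which uses the canonical log link). The predictors, coefficients, and sample size below are invented stand-ins, not the Barreiro data.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)
n = 1000
# five stand-ins for NO2, CO, SO2, relative humidity, wind speed
X = rng.normal(size=(n, 5))
# PM10 counts generated with a log link: E[y] = exp(intercept + X @ b)
mu = np.exp(1.5 + 0.4 * X[:, 0] + 0.2 * X[:, 1] - 0.3 * X[:, 4])
y = rng.poisson(mu)

# alpha=0 gives an unpenalized Poisson GLM fit
glm = PoissonRegressor(alpha=0.0, max_iter=300).fit(X, y)
print(np.round(glm.coef_, 1))
```

Because of the log link, each coefficient acts multiplicatively on predicted PM10, which is why the temperature-stratified models in the study can behave quite differently.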
McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D
1999-10-20
Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design.
Body Fat Percentage Prediction Using Intelligent Hybrid Approaches
Shao, Yuehjen E.
2014-01-01
Excess body fat often leads to obesity. Obesity is typically associated with serious medical conditions, such as cancer, heart disease, and diabetes. Accordingly, knowing one's body fat is an important issue since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches that require fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling uses MR and MARS to obtain a smaller but more important set of explanatory variables. In the second stage, the retained variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
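The two-stage hybrid scheme can be sketched as below: a regression-based selector first screens the explanatory variables, and the retained subset then feeds a nonlinear forecaster. The sketch substitutes Lasso shrinkage for the paper's MR/MARS selection step (a stated assumption), and uses SVR as the second stage; all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 300
X = rng.normal(size=(n, 12))   # stand-ins for abdomen, weight, height, ...
# BFP depends on only three of the twelve measurements
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.8 * X[:, 2] + 0.2 * rng.normal(size=n)

# stage 1: shrinkage regression screens out unimportant measurements
sel = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_ != 0)

# stage 2: the retained variables feed a nonlinear forecaster (here SVR)
Xtr, Xte, ytr, yte = train_test_split(X[:, sel], y, test_size=0.3,
                                      random_state=0)
svr = SVR(C=10.0).fit(Xtr, ytr)
rmse = np.sqrt(np.mean((yte - svr.predict(Xte)) ** 2))
print(len(sel), round(rmse, 2))
```

Reducing the input set before the nonlinear stage is what the abstract credits for the hybrid models outperforming single-stage forecasters.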
Probabilities and statistics for backscatter estimates obtained by a scatterometer
NASA Technical Reports Server (NTRS)
Pierson, Willard J., Jr.
1989-01-01
Methods for the recovery of winds near the surface of the ocean from measurements of the normalized radar backscattering cross section must recognize and make use of the statistics (i.e., the sampling variability) of the backscatter measurements. Radar backscatter values from a scatterometer are random variables with expected values given by a model. A model relates backscatter to properties of the waves on the ocean, which are in turn generated by the winds in the atmospheric marine boundary layer. The effective wind speed and direction at a known height for a neutrally stratified atmosphere are the values to be recovered from the model. The probability density function for the backscatter values is a normal probability distribution with the notable feature that the variance is a known function of the expected value. The sources of signal variability, the effects of this variability on the wind speed estimation, and criteria for the acceptance or rejection of models are discussed. A modified maximum likelihood method for estimating wind vectors is described. Ways to make corrections for the kinds of errors found for the Seasat SASS model function are described, and applications to a new scatterometer are given.
NASA Astrophysics Data System (ADS)
Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng
2016-11-01
The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
The effects of a confidant and a peer group on the well-being of single elders.
Gupta, V; Korte, C
1994-01-01
A study of 100 elderly people was carried out to compare the predictions of well-being derived from the confidant model with those derived from the Weiss model. The confidant model predicts that the most important feature of a person's social network for the well-being of that person is whether or not the person has a confidant. The Weiss model states that different persons are needed to fulfill the different needs of the person and in particular that a confidant is important to the need for intimacy and emotional security while a peer group of social friends is needed to fulfill sociability and identity needs. The two models were evaluated by comparing the relative influence of the confidant variable with the peer group variable on subjects' well-being. Regression analysis was carried out on the well-being measure using as predictor variables the confidant variable, peer group variable, age, health, and financial status. The confidant and peer group variables were of equal importance to well-being, thus confirming the Weiss model.
A Latent Transition Analysis Model for Assessing Change in Cognitive Skills
ERIC Educational Resources Information Center
Li, Feiming; Cohen, Allan; Bottge, Brian; Templin, Jonathan
2016-01-01
Latent transition analysis (LTA) was initially developed to provide a means of measuring change in dynamic latent variables. In this article, we illustrate the use of a cognitive diagnostic model, the DINA model, as the measurement model in a LTA, thereby demonstrating a means of analyzing change in cognitive skills over time. An example is…
Kamarianakis, Yiannis; Gao, H Oliver
2010-02-15
Collecting and analyzing high-frequency emission measurements has become very common during the past decade, as significantly more information on formation conditions can be collected than from regulated bag measurements. A challenging issue for researchers is the accurate time-alignment between tailpipe measurements and engine operating variables. An alignment procedure should take into account both the reaction time of the analyzers and the dynamics of gas transport in the exhaust and measurement systems. This paper discusses a statistical modeling framework that compensates for variable exhaust transport delay while relating tailpipe measurements to engine operating covariates. Specifically, it is shown that some variants of the smooth transition regression model allow for transport delays that vary smoothly as functions of the exhaust flow rate. These functions are characterized by a pair of coefficients that can be estimated via a least-squares procedure. The proposed models can be adapted to encompass inherent nonlinearities that were implicit in previous instantaneous emissions modeling efforts. This article describes the methodology and presents an illustrative application using data collected from a diesel bus under real-world driving conditions.
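The key idea, a transport delay that varies smoothly with exhaust flow rate and is characterized by a pair of coefficients fitted by least squares, can be sketched as below. The inverse-flow delay form `d = a + b/flow`, the signals, and all parameter values are assumptions for the demo, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
t = np.arange(0.0, 600.0, 1.0)                      # 1 Hz samples
flow = 2.5 + 1.0 * np.sin(t / 40.0) + 0.05 * rng.normal(size=t.size)
engine = np.sin(t / 15.0) + 0.5 * np.sin(t / 7.0)   # engine-side covariate

# true flow-dependent transport delay: shorter when flow is high
a_true, b_true = 2.0, 6.0
delay_true = a_true + b_true / flow
tailpipe = np.interp(t - delay_true, t, engine) + 0.05 * rng.normal(size=t.size)

def residual(p):
    # mis-alignment residual for candidate delay coefficients (a, b)
    a, b = p
    return tailpipe - np.interp(t - (a + b / flow), t, engine)

fit = least_squares(residual, x0=[1.0, 1.0])
print(np.round(fit.x, 1))
```

Once the pair of coefficients is estimated, every tailpipe sample can be shifted back by its own flow-dependent delay before being regressed on the engine covariates.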
Effort reward imbalance is associated with vagal withdrawal in Danish public sector employees.
Eller, Nanna Hurwitz; Blønd, Morten; Nielsen, Martin; Kristiansen, Jesper; Netterstrøm, Bo
2011-09-01
The current study analyzed the relationship between psychosocial work environment assessed by the Effort Reward Imbalance Model (ERI-model) and heart rate variability (HRV), measured at baseline and again two years later, as this relationship is scarcely covered by the literature. Measurements of HRV during seated rest were obtained from 231 public sector employees. The associations between the ERI-model and HRV were examined using a series of mixed effects models. The dependent variables were the logarithmically transformed levels of HRV measures. Gender and year of measurement were included as factors, whereas age and time of measurement were included as covariates. Subject was included as a random effect. Effort and effort reward imbalance were positively associated with heart rate and the ratio between low frequency (LF) and high frequency (HF) power, and negatively associated with total power (TP) and HF. Reward was positively associated with TP. An adverse psychosocial work environment according to the ERI-model was associated with HRV, especially in the form of vagal withdrawal, and most pronounced in women. Copyright © 2011 Elsevier B.V. All rights reserved.
Schmitt, Neal; Golubovich, Juliya; Leong, Frederick T L
2011-12-01
The impact of measurement invariance and the provision for partial invariance in confirmatory factor analytic models on factor intercorrelations, latent mean differences, and estimates of relations with external variables is investigated for measures of two sets of widely assessed constructs: Big Five personality and the six Holland interests (RIASEC). In comparing models that include provisions for partial invariance with models that do not, the results indicate quite small differences in parameter estimates involving the relations between factors, one relatively large standardized mean difference in factors between the subgroups compared and relatively small differences in the regression coefficients when the factors are used to predict external variables. The results provide support for the use of partially invariant models, but there does not seem to be a great deal of difference between structural coefficients when the measurement model does or does not include separate estimates of subgroup parameters that differ across subgroups. Future research should include simulations in which the impact of various factors related to invariance is estimated.
AN IMPROVED STRATEGY FOR REGRESSION OF BIOPHYSICAL VARIABLES AND LANDSAT ETM+ DATA. (R828309)
Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent wood...
A Model for the Correlates of Students' Creative Thinking
ERIC Educational Resources Information Center
Sarsani, Mahender Reddy
2007-01-01
The present study aimed to explore the relationships between organisational or school variables, students' personal background variables, and cognitive and motivational variables. The sample for the survey included 373 students drawn from nine Government schools in Andhra Pradesh, India. Students' creative thinking abilities were measured by…
Do little interactions get lost in dark random forests?
Wright, Marvin N; Ziegler, Andreas; König, Inke R
2016-03-31
Random forests have often been claimed to uncover interaction effects. However, if and how interaction effects can be differentiated from marginal effects remains unclear. In extensive simulation studies, we investigate whether random forest variable importance measures capture or detect gene-gene interactions. By capturing interactions, we mean the ability to identify a variable that acts through an interaction with another one, while detection is the ability to identify an interaction effect as such. Of the single importance measures, the Gini importance captured interaction effects in most of the simulated scenarios; however, these effects were masked by marginal effects in other variables. With the permutation importance, the proportion of captured interactions was lower in all cases. Pairwise importance measures performed about equally, with a slight advantage for the joint variable importance method. However, the overall fraction of detected interactions was low. In almost all scenarios the detection fraction in a model with only marginal effects was larger than in a model with an interaction effect only. Random forests are generally capable of capturing gene-gene interactions, but current variable importance measures are unable to detect them as interactions. In most cases, interactions are masked by marginal effects and cannot be differentiated from marginal effects. Consequently, caution is warranted when claiming that random forests uncover interactions.
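The distinction between capturing and detecting an interaction can be made concrete with a small sketch. This is not the authors' simulation code; it is a minimal pure-Python illustration of permutation importance applied to a toy oracle model with one pure interaction (x0·x1) and one marginal effect (x2):

```python
import random
import statistics

def mse(model, X, y):
    """Mean squared prediction error of `model` on data (X, y)."""
    return statistics.fmean((model(row) - yi) ** 2 for row, yi in zip(X, y))

def permutation_importance(model, X, y, col, seed=0):
    """Increase in MSE after shuffling one predictor column.

    A large increase means the model relies on that column, whether its
    effect is marginal or acts through an interaction -- the score alone
    cannot tell the two apart.
    """
    rng = random.Random(seed)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return mse(model, X_perm, y) - mse(model, X, y)

# Toy data: y depends on x0*x1 (pure interaction) and x2 (marginal effect).
rng = random.Random(42)
X = [[rng.choice([0, 1]), rng.choice([0, 1]), rng.random()] for _ in range(500)]
y = [2.0 * r[0] * r[1] + 1.0 * r[2] for r in X]

model = lambda r: 2.0 * r[0] * r[1] + 1.0 * r[2]   # oracle model, for clarity

imp = [permutation_importance(model, X, y, c) for c in range(3)]
# x0 and x1 get nonzero importance even though each acts only via the
# interaction: the importance "captures" the interaction without
# "detecting" it as one.
```

Because shuffling x0 or x1 degrades predictions, both columns receive positive importance, yet nothing in the score identifies the x0·x1 effect as an interaction rather than two marginal effects.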
Presentation a New Model to Measure National Power of the Countries
NASA Astrophysics Data System (ADS)
Hafeznia, Mohammad Reza; Hadi Zarghani, Seyed; Ahmadipor, Zahra; Roknoddin Eftekhari, Abdelreza
In this research, based on an assessment of previous models for the evaluation of national power, a new model is presented to measure national power that improves substantially on its predecessors. Its benefits include attention to all aspects of national power (economic, social, cultural, political, military, astro-space, territorial, scientific-technological and transnational), the use of 87 factors, and an emphasis on new, strategically relevant variables suited to the current time. In addition, the use of the Delphi method and of expert opinion to determine the role and importance of the variables affecting national power, together with the option of mapping the global power structure, are further advantages of this model over previous ones.
A procedural model for planning and evaluating behavioral interventions.
Hyner, G C
2005-01-01
A model for planning, implementing and evaluating health behavior change strategies is proposed. Variables are presented which can be used in the model or serve as examples of how the model is utilized once a theory of health behavior is adopted. Examples of three innovative strategies designed to influence behavior change are presented so that the proposed model can be modified for use following comprehensive screening and baseline measurements. Three measurement priorities (clients, methods and agency) are subjected to three phases of assessment (goals, implementation and effects). Lifestyles account for the majority of variability in quality of life and premature morbidity and mortality. Interventions designed to influence healthy behavior changes must be driven by theory and carefully planned and evaluated. The proposed model is offered as a useful tool for the behavior change strategist.
Morgenroth, S; Thomas, J; Cannizzaro, V; Weiss, M; Schmidt, A R
2018-03-01
Spirometric monitoring provides precise measurement and delivery of tidal volumes within a narrow range, which is essential for lung-protective strategies that aim to reduce morbidity and mortality in mechanically-ventilated patients. Conventional anaesthesia ventilators include inbuilt spirometry to monitor inspiratory and expiratory tidal volumes. The GE Aisys CS 2 anaesthesia ventilator allows additional near-patient spirometry via a sensor interposed between the proximal end of the tracheal tube and the respiratory tubing. Near-patient and inbuilt spirometry of two different GE Aisys CS 2 anaesthesia ventilators were compared in an in-vitro study. Assessments were made of accuracy and variability in inspiratory and expiratory tidal volume measurements during ventilation of six simulated paediatric lung models using the ASL 5000 test lung. A total of 9240 breaths were recorded and analysed. Differences between inspiratory tidal volumes measured with near-patient and inbuilt spirometry were most significant in the newborn setting (p < 0.001), and became less significant with increasing age and weight. During expiration, tidal volume measurements with near-patient spirometry were consistently more accurate than with inbuilt spirometry for all lung models (p < 0.001). Overall, the variability in measured tidal volumes decreased with increasing tidal volumes, and was smaller with near-patient than with inbuilt spirometry. The variability in measured tidal volumes was higher during expiration, especially with inbuilt spirometry. In conclusion, the present in-vitro study shows that measurements with near-patient spirometry are more accurate and less variable than with inbuilt spirometry. Differences between measurement methods were most significant in the smallest patients. We therefore recommend near-patient spirometry, especially for neonatal and paediatric patients. © 2018 The Association of Anaesthetists of Great Britain and Ireland.
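The accuracy and variability comparison in the abstract above can be summarized with a Bland-Altman-style bias and spread of differences. This is a hedged sketch: the tidal volumes below are invented, and the study's exact statistics may differ:

```python
import statistics

def agreement(reference, measured):
    """Mean bias and SD of differences between a measurement method and
    reference tidal volumes (a Bland-Altman-style summary)."""
    diffs = [m - r for r, m in zip(reference, measured)]
    return statistics.fmean(diffs), statistics.stdev(diffs)

# Hypothetical tidal volumes (mL) from a simulated newborn lung model.
reference    = [20.0, 21.0, 19.5, 20.5, 20.0, 21.5]
near_patient = [20.2, 21.1, 19.6, 20.6, 20.1, 21.4]  # close to reference
inbuilt      = [22.5, 24.0, 21.0, 23.5, 22.0, 24.5]  # larger, noisier error

bias_np, sd_np = agreement(reference, near_patient)
bias_ib, sd_ib = agreement(reference, inbuilt)
# Near-patient sensor: smaller bias (accuracy) and smaller spread
# (variability), mirroring the study's finding that near-patient
# spirometry performs best at small tidal volumes.
```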
Madden, J M; O'Flynn, A M; Dolan, E; Fitzgerald, A P; Kearney, P M
2015-12-01
Blood pressure variability (BPV) has been associated with cardiovascular events; however, the prognostic significance of short-term BPV remains uncertain. As uncertainty also remains as to which measure of variability most accurately describes short-term BPV, this study explores different indices and investigates their relationship with subclinical target organ damage (TOD). We used data from the Mitchelstown Study, a cross-sectional study of Irish adults aged 47-73 years (n=2047). A subsample (1207) underwent 24-h ambulatory BP monitoring (ABPM). As measures of short-term BPV, we estimated the s.d., weighted s.d. (wSD), coefficient of variation (CV) and average real variability (ARV). TOD was documented by microalbuminuria and electrocardiogram (ECG) left ventricular hypertrophy (LVH). No association was found between any measure of BPV and LVH in either unadjusted or fully adjusted logistic regression models. Similar analysis found that ARV (24 h, day and night), s.d. (day and night) and wSD were all univariately associated with microalbuminuria and remained associated after adjustment for age, gender, smoking, body mass index (BMI), diabetes and antihypertensive treatment. However, when the models were further adjusted for mean BP, the association did not persist for all indices. Our findings illustrate that choosing an appropriate summary measure that accurately captures short-term BPV is difficult. Despite discrepancies in values between the different measures, there was no association between any index of variability and the TOD measures after adjustment for mean BP.
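The competing short-term BPV indices compared in this study are all simple functions of the 24-h ABPM series. A minimal sketch using invented readings (the formulas are the standard definitions; the study's exact day/night weighting may differ):

```python
import statistics

def arv(bp):
    """Average real variability: mean absolute successive difference."""
    return statistics.fmean(abs(b - a) for a, b in zip(bp, bp[1:]))

def cv(bp):
    """Coefficient of variation, in percent."""
    return 100.0 * statistics.stdev(bp) / statistics.fmean(bp)

def weighted_sd(day, night, day_hours=16.0, night_hours=8.0):
    """SD weighted by the duration of the day and night periods."""
    return (statistics.stdev(day) * day_hours +
            statistics.stdev(night) * night_hours) / (day_hours + night_hours)

# Hypothetical systolic readings (mmHg) from one 24-h ABPM recording.
day   = [128, 135, 122, 140, 131, 126, 138, 129]
night = [112, 108, 115, 110, 109, 114]

sd24 = statistics.stdev(day + night)   # plain 24-h SD
wsd  = weighted_sd(day, night)         # discounts the day-night dip
```

The plain 24-h SD absorbs the nocturnal dip into "variability", which is one reason indices such as wSD and ARV were proposed as alternatives.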
Spatio-temporal variation of urban ultrafine particle number concentrations
NASA Astrophysics Data System (ADS)
Ragettli, Martina S.; Ducret-Stich, Regina E.; Foraster, Maria; Morelli, Xavier; Aguilera, Inmaculada; Basagaña, Xavier; Corradi, Elisabetta; Ineichen, Alex; Tsai, Ming-Yi; Probst-Hensch, Nicole; Rivera, Marcela; Slama, Rémy; Künzli, Nino; Phuleria, Harish C.
2014-10-01
Methods are needed to characterize short-term exposure to ultrafine particle number concentrations (UFP) for epidemiological studies on the health effects of traffic-related UFP. Our aims were to assess season-specific spatial variation of short-term (20-min) UFP within the city of Basel, Switzerland, and to develop hybrid models for predicting short-term median and mean UFP levels on sidewalks. We collected measurements of UFP for periods of 20 min (MiniDiSC particle counter) and determined traffic volume along sidewalks at 60 locations across the city, during non-rush hours in three seasons. For each monitoring location, detailed spatial characteristics were locally recorded and potential predictor variables were derived from geographic information systems (GIS). We built multivariate regression models to predict local UFP, using concurrent UFP levels measured at a suburban background station, and combinations of meteorological, temporal, GIS and observed site characteristic variables. For a subset of sites, we assessed the relationship between UFP measured on the sidewalk and at the nearby residence (i.e., home outdoor exposure on e.g. balconies). The average median 20-min UFP levels at street and urban background sites were 14,700 ± 9100 particles cm-3 and 9900 ± 8600 particles cm-3, respectively, with the highest levels occurring in winter and the lowest in summer. The most important predictor for all models was the suburban background UFP concentration, explaining 50% and 38% of the variability of the median and mean, respectively. While the models with GIS-derived variables (R2 = 0.61) or observed site characteristics (R2 = 0.63) predicted median UFP levels equally well, mean UFP predictions using only site characteristic variables (R2 = 0.62) showed a better fit than models using only GIS variables (R2 = 0.55). 
The best model performance was obtained by using a combination of GIS-derived variables and locally observed site characteristics (median: R2 = 0.66; mean: R2 = 0.65). The 20-min UFP concentrations measured at the sidewalk were strongly related (R2 = 0.8) to the concurrent 20-min residential UFP levels nearby. Our results indicate that median UFP can be moderately predicted by means of a suburban background site and GIS-derived traffic and land use variables. In areas and regions where large-scale GIS data are not available, the spatial distribution of traffic-related UFP may be assessed reasonably well by collecting on-site short-term traffic and land-use data.
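The R² values quoted throughout the record above are the usual coefficient of determination, which can be computed directly from observations and predictions. A minimal sketch with invented UFP values:

```python
import statistics

def r_squared(observed, predicted):
    """Coefficient of determination: share of the variance in the
    observations explained by the predictions."""
    mean_obs = statistics.fmean(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical 20-min median UFP levels (particles/cm^3) at a few sites
# and the corresponding model predictions.
obs  = [14700, 9900, 12500, 16800, 8200, 11000]
pred = [14100, 10400, 12000, 15900, 9100, 11800]

r2 = r_squared(obs, pred)
```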
Analysis of the thermal comfort model in an environment of metal mechanical branch.
Pinto, N M; Xavier, A A P; do Amaral, Regiane T
2012-01-01
This study aims to identify the correlation between the Predicted Mean Vote (PMV) and the thermal sensation (S) of 55 employees, establishing a linear multiple regression equation. The measurement of environmental variables followed established standards. The survey was conducted in a metal industry located in Ponta Grossa in the State of Parana, Brazil. The physical model of thermal comfort was applied to the environmental variables and also to the subjective data on the thermal sensations of employees. The survey was conducted from May to November 2010, with 48 measurements. This study will serve as the basis for a dissertation consisting of 72 measurements.
Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items
ERIC Educational Resources Information Center
Lu, Irene R. R.; Thomas, D. Roland
2008-01-01
This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…
Zhaohua Dai; Carl C. Trettin; Changsheng Li; Ge Sun; Devendra M. Amatya; Harbin Li
2013-01-01
The impacts of hurricane disturbance and climate variability on carbon dynamics in a coastal forested wetland in South Carolina, USA, were simulated using the Forest-DNDC model with a spatially explicit approach. The model was validated using the measured biomass before and after Hurricane Hugo and the biomass inventories in 2006 and 2007, which showed that the Forest-DNDC...
ERIC Educational Resources Information Center
Ludtke, Oliver; Marsh, Herbert W.; Robitzsch, Alexander; Trautwein, Ulrich
2011-01-01
In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data…
Measurement problem and local hidden variables with entangled photons
NASA Astrophysics Data System (ADS)
Muchowski, Eugen
2017-12-01
It is shown that there is no remote action with polarization measurements of photons in singlet state. A model is presented introducing a hidden parameter which determines the polarizer output. This model is able to explain the polarization measurement results with entangled photons. It is not ruled out by Bell's Theorem.
Mechanistic materials modeling for nuclear fuel performance
Tonks, Michael R.; Andersson, David; Phillpot, Simon R.; ...
2017-03-15
Fuel performance codes are critical tools for the design, certification, and safety analysis of nuclear reactors. However, their ability to predict fuel behavior under abnormal conditions is severely limited by their considerable reliance on empirical materials models correlated to burn-up (a measure of the number of fission events that have occurred, but not a unique measure of the history of the material). In this paper, we propose a different paradigm for fuel performance codes to employ mechanistic materials models that are based on the current state of the evolving microstructure rather than burn-up. In this approach, a series of state variables are stored at material points and define the current state of the microstructure. The evolution of these state variables is defined by mechanistic models that are functions of fuel conditions and other state variables. The material properties of the fuel and cladding are determined from microstructure/property relationships that are functions of the state variables and the current fuel conditions. Multiscale modeling and simulation is being used in conjunction with experimental data to inform the development of these models. Finally, this mechanistic, microstructure-based approach has the potential to provide a more predictive fuel performance capability, but will require a team of researchers to complete the required development and to validate the approach.
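The state-variable paradigm can be sketched in a few lines. Everything below (the variable names, rate laws, and coefficients) is invented for illustration; real fuel-performance models are mechanistic and far more elaborate:

```python
from dataclasses import dataclass

@dataclass
class MaterialPoint:
    """Illustrative microstructure state at one material point."""
    porosity: float     # pore volume fraction
    grain_size: float   # micrometres

def evolve(state: MaterialPoint, temperature: float, dt: float) -> MaterialPoint:
    """Advance the state variables with rates that depend on current
    conditions and on the state itself -- not on burn-up."""
    growth = 1e-4 * temperature * dt                   # toy grain-growth rate
    densify = 1e-6 * temperature * dt * state.porosity # toy densification rate
    return MaterialPoint(porosity=state.porosity - densify,
                         grain_size=state.grain_size + growth)

def conductivity(state: MaterialPoint) -> float:
    """Toy microstructure/property relation: conduction degrades with
    porosity (coefficients are placeholders)."""
    return 5.0 * (1.0 - 2.5 * state.porosity)

pt = MaterialPoint(porosity=0.05, grain_size=10.0)
pt = evolve(pt, temperature=1200.0, dt=0.1)
k = conductivity(pt)
```

The point of the paradigm is visible even in this skeleton: the property (conductivity) is a function of the current state, and the state is advanced by condition-dependent rates, so history enters only through the state variables.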
New approach to probability estimate of femoral neck fracture by fall (Slovak regression model).
Wendlova, J
2009-01-01
3,216 Slovak women with primary or secondary osteoporosis or osteopenia, aged 20-89 years, were examined with the bone densitometer DXA (dual energy X-ray absorptiometry, GE, Prodigy - Primo), x = 58.9, 95% C.I. (58.42; 59.38). The values of the following variables were measured for each patient: FSI (femur strength index), T-score total hip left, alpha angle - left, theta angle - left, and HAL (hip axis length) left; BMI (body mass index) was calculated from the height and weight of the patients. The regression model determined the following order of independent variables according to the intensity of their influence upon the occurrence of values of the dependent FSI variable: 1. BMI, 2. theta angle, 3. T-score total hip, 4. alpha angle, 5. HAL. The regression model equation, calculated from the variables monitored in the study, enables a doctor in practice to determine the probability magnitude (absolute risk) of the occurrence of a pathological value of FSI (FSI < 1) in the femoral neck area, i.e., it allows a probability estimate of a femoral neck fracture by fall for Slovak women. 1. The Slovak regression model differs from regression models published until now in its chosen independent variables and a dependent variable belonging to biomechanical variables characterising bone quality. 2. The Slovak regression model excludes the inaccuracies of other models, which are not able to define precisely the current and past clinical condition of tested patients (e.g., to define the length and dose of exposure to risk factors). 3. The Slovak regression model opens the way to a new method of estimating the probability (absolute risk) or the odds of a femoral neck fracture by fall, based upon bone quality determination. 4. It is assumed that development will proceed by improving the methods of measuring bone quality and determining the probability of fracture by fall (Tab. 6, Fig. 3, Ref. 22).
NASA Astrophysics Data System (ADS)
Li, Lianfa; Wu, Anna H.; Cheng, Iona; Chen, Jiu-Chiuan; Wu, Jun
2017-10-01
Monitoring of fine particulate matter with diameter <2.5 μm (PM2.5) started in 1999 in the US and even later in many other countries. The lack of historical PM2.5 data limits epidemiological studies of long-term exposure to PM2.5 and health outcomes such as cancer. In this study, we aimed to design a flexible approach to reliably estimate historical PM2.5 concentrations by incorporating spatial effects and the measurements of existing co-pollutants such as particulate matter with diameter <10 μm (PM10) and meteorological variables. Monitoring data on PM10, PM2.5, and meteorological variables covering the entire state of California were obtained from 1999 through 2013. We developed a spatiotemporal model that quantified non-linear associations between PM2.5 concentrations and the following predictor variables: spatiotemporal factors (PM10 and meteorological variables), spatial factors (land-use patterns, traffic, elevation, distance to shorelines, and spatial autocorrelation), and season. Our model accounted for regional- (county) scale spatial autocorrelation, using a spatial weight matrix, and local-scale spatiotemporal variability, using local covariates in an additive non-linear model. The spatiotemporal model was evaluated using leave-one-site-month-out cross validation. Our final daily model had an R2 of 0.81, with PM10, meteorological variables, and spatial autocorrelation explaining 55%, 10%, and 10% of the variance in PM2.5 concentrations, respectively. The model had a cross-validation R2 of 0.83 for monthly PM2.5 concentrations (N = 8170) and 0.79 for daily PM2.5 concentrations (N = 51,421), with few extreme values in prediction. Further, the incorporation of spatial effects reduced bias in predictions. Our approach achieved a cross-validation R2 of 0.61 for the daily model when PM10 was replaced by total suspended particulate. Our model can robustly estimate historical PM2.5 concentrations in California when PM2.5 measurements are not available.
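Leave-one-site-month-out cross-validation holds out every (site, month) group in turn, so the model is never validated on data from the same site and month it was trained on. A minimal sketch with an invented PM10-to-PM2.5 regression (not the study's actual model):

```python
import statistics
from collections import defaultdict

def fit_slope(x, y):
    """Least-squares fit y ≈ a + b*x (here: PM2.5 from PM10)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def loso_month_cv(records):
    """Hold out each (site, month) group in turn; predict it with a
    model fit on the remaining groups. Returns (observed, predicted)."""
    groups = defaultdict(list)
    for rec in records:
        groups[(rec["site"], rec["month"])].append(rec)
    pairs = []
    for key, held_out in groups.items():
        train = [r for k, g in groups.items() if k != key for r in g]
        a, b = fit_slope([r["pm10"] for r in train],
                         [r["pm25"] for r in train])
        pairs += [(r["pm25"], a + b * r["pm10"]) for r in held_out]
    return pairs

# Hypothetical daily records; pm25 is roughly half of pm10 plus noise.
records = [{"site": s, "month": m, "pm10": p, "pm25": 0.5 * p + e}
           for s, m, p, e in [("A", 1, 30, 1), ("A", 1, 40, -1),
                              ("A", 2, 25, 0), ("B", 1, 50, 2),
                              ("B", 2, 35, -2), ("B", 2, 45, 1)]]
pairs = loso_month_cv(records)
```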
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
Baldwin, Austin K.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher
2013-01-01
In 2008, the U.S. Geological Survey and the Milwaukee Metropolitan Sewerage District initiated a study to develop regression models to estimate real-time concentrations and loads of chloride, suspended solids, phosphorus, and bacteria in streams near Milwaukee, Wisconsin. To collect monitoring data for calibration of models, water-quality sensors and automated samplers were installed at six sites in the Menomonee River drainage basin. The sensors continuously measured four potential explanatory variables: water temperature, specific conductance, dissolved oxygen, and turbidity. Discrete water-quality samples were collected and analyzed for five response variables: chloride, total suspended solids, total phosphorus, Escherichia coli bacteria, and fecal coliform bacteria. Using the first year of data, regression models were developed to continuously estimate the response variables on the basis of the continuously measured explanatory variables. Those models were published in a previous report. In this report, those models are refined using 2 years of additional data, and the relative improvement in model predictability is discussed. In addition, a set of regression models is presented for a new site in the Menomonee River Basin, Underwood Creek at Wauwatosa. The refined models use the same explanatory variables as the original models. The chloride models all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity. Total suspended solids and total phosphorus models used turbidity as the only explanatory variable, and bacteria models used water temperature and turbidity as explanatory variables. An analysis of covariance (ANCOVA), used to compare the coefficients in the original models to those in the refined models calibrated using all of the data, showed that only 3 of the 25 original models changed significantly. 
Root-mean-squared errors (RMSEs) calculated for both the original and refined models using the entire dataset showed a median improvement in RMSE of 2.1 percent, with a range of 0.0–13.9 percent. Therefore most of the original models did almost as well at estimating concentrations during the validation period (October 2009–September 2011) as the refined models, which were calibrated using those data. Application of these refined models can produce continuously estimated concentrations of chloride, total suspended solids, total phosphorus, E. coli bacteria, and fecal coliform bacteria that may assist managers in quantifying the effects of land-use changes and improvement projects, establishing total maximum daily loads, and enabling better-informed decision making in the future.
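The percent improvement in RMSE reported above can be reproduced mechanically. A sketch with invented concentrations (not the study's data):

```python
import math

def rmse(observed, predicted):
    """Root-mean-squared error of model estimates."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

# Hypothetical chloride concentrations (mg/L) with estimates from an
# original model and a refined (recalibrated) model at one site.
obs      = [35.0, 42.0, 28.0, 50.0, 33.0]
original = [33.0, 45.0, 26.5, 47.0, 35.5]
refined  = [33.5, 44.5, 27.0, 47.5, 35.0]

# Percent improvement in RMSE of the refined model over the original.
improvement = (100.0 * (rmse(obs, original) - rmse(obs, refined))
               / rmse(obs, original))
```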
Planillo, Aimara; Malo, Juan E
2018-01-01
Human disturbance is widespread across landscapes in the form of roads that alter wildlife populations. Knowing which road features are responsible for the species response, and their relevance in comparison with environmental variables, will provide useful information for effective conservation measures. We sampled the relative abundance of European rabbits, a very widespread species, in motorway verges at a regional scale, in an area with large variability in environmental and infrastructure conditions. Environmental variables included vegetation structure, plant productivity, distance to water sources, and altitude. Infrastructure characteristics were the type of vegetation in verges, verge width, traffic volume, and the presence of embankments. We performed a variance partitioning analysis to determine the relative importance of the two sets of variables on rabbit abundance. Additionally, we identified the most important variables and their effects through model averaging after model selection by AICc on hypothesis-based models. As a group, infrastructure features explained four times more variability in rabbit abundance than environmental variables, with the effects of the former being critical in motorway stretches located in altered landscapes with no available habitat for rabbits, such as agricultural fields. Model selection and Akaike weights showed that verge width and traffic volume are the most important variables explaining the rabbit abundance index, with positive and negative effects, respectively. In light of these results, the response of species to the infrastructure can be modulated through the modification of motorway features, some of which can be managed in the design phase. The identification of such features leads to suggestions for improvement through low-cost corrective measures and conservation plans.
As a general indication, keeping motorway verges less than 10 m wide will prevent high densities of rabbits and avoid the unwanted effects that rabbit populations can generate in some areas.
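Model selection by AICc and Akaike weights, as used in the study above, follows a standard recipe. A sketch in which the candidate models, log-likelihoods and parameter counts are all invented:

```python
import math

def aicc(log_likelihood, k, n):
    """AICc: AIC with the small-sample correction term."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Relative support for each model; the weights sum to 1."""
    best = min(scores)
    raw = [math.exp(-0.5 * (s - best)) for s in scores]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical candidate models for rabbit abundance:
# name -> (maximized log-likelihood, number of parameters).
candidates = {"verge width + traffic": (-95.2, 3),
              "environment only":      (-99.8, 5),
              "full model":            (-94.5, 8)}

n = 60   # hypothetical number of sampled motorway stretches
scores = {name: aicc(ll, k, n) for name, (ll, k) in candidates.items()}
weights = dict(zip(scores, akaike_weights(list(scores.values()))))
```

Note how the correction term penalizes the 8-parameter "full model" relative to the sparser model, so the simpler hypothesis collects nearly all of the weight here.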
Gulliver, John; Morley, David; Dunster, Chrissi; McCrea, Adrienne; van Nunen, Erik; Tsai, Ming-Yi; Probst-Hensch, Nicoltae; Eeftens, Marloes; Imboden, Medea; Ducret-Stich, Regina; Naccarati, Alessio; Galassi, Claudia; Ranzi, Andrea; Nieuwenhuijsen, Mark; Curto, Ariadna; Donaire-Gonzalez, David; Cirach, Marta; Vermeulen, Roel; Vineis, Paolo; Hoek, Gerard; Kelly, Frank J
2018-01-01
Oxidative potential (OP) of particulate matter (PM) is proposed as a biologically relevant exposure metric for studies of air pollution and health. We aimed to evaluate the spatial variability of the OP of measured PM2.5 using ascorbate (AA) and (reduced) glutathione (GSH), and to develop land use regression (LUR) models to explain this spatial variability. We estimated annual average values (m-3) of OPAA and OPGSH for five areas (Basel, CH; Catalonia, ES; London-Oxford, UK (no OPGSH); the Netherlands; and Turin, IT) using PM2.5 filters. OPAA and OPGSH LUR models were developed using all monitoring sites, separately for each area and for all areas combined. The same variables were then used in repeated sub-sampling of monitoring sites to test the sensitivity of variable selection; new variables were offered where variables were excluded (p > .1). On average, measurements of OPAA and OPGSH were moderately correlated (maximum Pearson's R = .7) with PM2.5 and other metrics (PM2.5 absorbance, NO2, Cu, Fe). Hold-out validation (HOV) R2 for OPAA models was .21, .58, .45, .53, and .13 for Basel, Catalonia, London-Oxford, the Netherlands and Turin, respectively. For OPGSH, the only model achieving at least moderate performance was for the Netherlands (R2 = .31). Combined models for OPAA and OPGSH were largely explained by study area with weak local predictors of intra-area contrasts; we therefore do not endorse them for use in epidemiologic studies. Given the moderate correlation of OPAA with other pollutants, the three reasonably performing LUR models for OPAA could be used independently of other pollutant metrics in epidemiological studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Goode, C; LeRoy, J; Allen, D G
2007-01-01
This study reports on a multivariate analysis of the moving bed biofilm reactor (MBBR) wastewater treatment system at a Canadian pulp mill. The modelling approach involved a data overview by principal component analysis (PCA) followed by partial least squares (PLS) modelling with the objective of explaining and predicting changes in the BOD output of the reactor. Over two years of data with 87 process measurements were used to build the models. Variables were collected from the MBBR control scheme as well as upstream in the bleach plant and in digestion. To account for process dynamics, a variable lagging approach was used for variables with significant temporal correlations. It was found that wood type pulped at the mill was a significant variable governing reactor performance. Other important variables included flow parameters, faults in the temperature or pH control of the reactor, and some potential indirect indicators of biomass activity (residual nitrogen and pH out). The most predictive model was found to have an RMSEP value of 606 kgBOD/d, representing a 14.5% average error. This was a good fit, given the measurement error of the BOD test. Overall, the statistical approach was effective in describing and predicting MBBR treatment performance.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. The independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if those parameter estimates are obtained by a least-squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
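The bias described above is the classical errors-in-variables attenuation effect, which a short simulation makes visible. The rainfall-runoff relation below is a toy linear model with invented coefficients, not the USGS model from the study:

```python
import random
import statistics

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

rng = random.Random(1)
n = 5000
true_slope = 2.0

rain_true = [rng.gauss(10.0, 2.0) for _ in range(n)]       # actual rainfall
runoff = [true_slope * r + rng.gauss(0.0, 1.0) for r in rain_true]
rain_meas = [r + rng.gauss(0.0, 2.0) for r in rain_true]   # gauged with error

slope_clean = ols_slope(rain_true, runoff)   # close to 2.0
slope_noisy = ols_slope(rain_meas, runoff)   # attenuated toward zero

# Classical errors-in-variables result: the expected noisy slope is the
# true slope times var(x) / (var(x) + var(error)) = 2.0 * 4 / 8 = 1.0,
# i.e. a biased parameter estimate caused purely by input measurement error.
```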
2015-01-07
Measures of residual risk view the random variable of interest in concert with an auxiliary random vector that helps to manage, predict, and mitigate the risk in the original variable; residual risk can be exemplified as a quantification of the improved…
NASA Astrophysics Data System (ADS)
Singh, A.; Serbin, S.; Kucharik, C. J.; Townsend, P. A.
2014-12-01
Ecosystem models such as AgroIBIS require detailed parameterizations of numerous vegetation traits related to leaf structure, biochemistry and photosynthetic capacity to properly assess plant carbon assimilation and yield response to environmental variability. In general, these traits are estimated from a limited number of field measurements or sourced from the literature, but rarely is the full observed range of variability in these traits utilized in modeling activities. In addition, pathogens and pests, such as the exotic soybean aphid (Aphis glycines), which affects photosynthetic pathways in soybean plants by feeding on phloem and sap, can potentially impact plant productivity and yields. Capturing plant responses to pest pressure in conjunction with environmental variability is of considerable interest to managers and the scientific community alike. In this research, we employed full-range (400-2500 nm) field and laboratory spectroscopy to rapidly characterize leaf biochemical and physiological traits, namely foliar nitrogen, specific leaf area (SLA) and the maximum rate of RuBP carboxylation by the enzyme RuBisCO (Vcmax), in soybean plants that experienced a broad range of environmental conditions and soybean aphid pressures. We utilized near-surface spectroscopic remote sensing measurements as a means to capture the spatial and temporal patterns of aphid impacts across broad aphid pressure levels. In addition, we used the spectroscopic data to generate a much larger dataset of key model parameters required by AgroIBIS than would be possible through traditional measurements of biochemistry and leaf-level gas exchange. The use of spectroscopic retrievals of soybean traits allowed us to better characterize the variability of plant responses associated with aphid pressure and to more accurately model the likely impacts of soybean aphid on soybeans.
Our next steps include the coupling of the information derived from our spectral measurements with the AgroIBIS model to project the impacts of increasing aphid pressures on yields expected with continued global change and altered environmental conditions.
A General Multidimensional Model for the Measurement of Cultural Differences.
ERIC Educational Resources Information Center
Olmedo, Esteban L.; Martinez, Sergio R.
A multidimensional model for measuring cultural differences (MCD) based on factor analytic theory and techniques is proposed. The model assumes that a cultural space may be defined by means of a relatively small number of orthogonal dimensions which are linear combinations of a much larger number of cultural variables. Once a suitable,…
A Multilevel CFA-MTMM Model for Nested Structurally Different Methods
ERIC Educational Resources Information Center
Koch, Tobias; Schultze, Martin; Burrus, Jeremy; Roberts, Richard D.; Eid, Michael
2015-01-01
The numerous advantages of structural equation modeling (SEM) for the analysis of multitrait-multimethod (MTMM) data are well known. MTMM-SEMs allow researchers to explicitly model the measurement error, to examine the true convergent and discriminant validity of the given measures, and to relate external variables to the latent trait as well as…
NASA Astrophysics Data System (ADS)
Gebler, S.; Hendricks Franssen, H.-J.; Kollet, S. J.; Qu, W.; Vereecken, H.
2017-04-01
The prediction of the spatial and temporal variability of land surface states and fluxes with land surface models at high spatial resolution is still a challenge. This study compares simulation results for a 38 ha managed grassland headwater catchment in the Eifel (Germany), obtained with TerrSysMP, which couples a 3D variably saturated groundwater flow model (ParFlow) to the Community Land Model (CLM), against soil water content (SWC) measurements from a wireless sensor network, actual evapotranspiration recorded by lysimeters and eddy covariance stations, and discharge observations. TerrSysMP was discretized with a 10 × 10 m lateral resolution, variable vertical resolution (0.025-0.575 m), and the following parameterization strategies of the subsurface soil hydraulic parameters: (i) completely homogeneous, (ii) homogeneous parameters for different soil horizons, (iii) different parameters for each soil unit and soil horizon and (iv) heterogeneous stochastic realizations. Hydraulic conductivity and Mualem-Van Genuchten parameters in these simulations were sampled from probability density functions, constructed from either (i) soil texture measurements and Rosetta pedotransfer functions (ROS), or (ii) soil hydraulic parameters estimated by 1D inverse modelling using shuffled complex evolution (SCE). The results indicate that the spatial variability of SWC at the scale of a small headwater catchment is dominated by topography and spatially heterogeneous soil hydraulic parameters. The spatial variability of the soil water content thereby increases as a function of heterogeneity of soil hydraulic parameters. For lower levels of complexity, spatial variability of the SWC was underrepresented, in particular for the ROS simulations.
Whereas all model simulations were able to reproduce the seasonal evapotranspiration variability, the poor discharge simulations with high model bias are likely related to short-term ET dynamics and the lack of information about bedrock characteristics and an on-site drainage system in the uncalibrated model. In general, simulation performance was better for the SCE setups. The SCE-simulations had a higher inverse air entry parameter resulting in SWC dynamics in better correspondence with data than the ROS simulations during dry periods. This illustrates that small scale measurements of soil hydraulic parameters cannot be transferred to the larger scale and that interpolated 1D inverse parameter estimates result in an acceptable performance for the catchment.
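The parameter-sampling step described above might be sketched as follows; the distributions and their moments are invented for illustration and are not the study's fitted densities.

```python
import numpy as np

rng = np.random.default_rng(10)
n_real = 100                                   # stochastic realizations

# Saturated hydraulic conductivity is commonly treated as log-normal;
# the Mualem-Van Genuchten alpha and n here are truncated normals.
ksat = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n_real)   # m/s
alpha = np.clip(rng.normal(2.0, 0.5, n_real), 0.1, None)          # 1/m
n_vg = np.clip(rng.normal(1.6, 0.2, n_real), 1.05, None)          # [-]

print("Ksat geometric mean:", float(np.exp(np.log(ksat).mean())))
print("alpha mean:", round(float(alpha.mean()), 2),
      "n mean:", round(float(n_vg.mean()), 2))
```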
Kasser, Susan L; Goldstein, Amanda; Wood, Phillip K; Sibold, Jeremy
2017-04-01
Individuals with multiple sclerosis (MS) experience a clinical course that is highly variable, with daily fluctuations in symptoms significantly affecting functional ability and quality of life. Yet how MS symptoms co-vary and associate with physical and psychological health remains unclear. The purpose of the study was to explore variability patterns and time-bound relationships across symptoms, affect, and physical activity in individuals with MS. The study employed a multivariate, replicated, single-subject repeated-measures (MRSRM) design and involved four individuals with MS. Mood, fatigue, pain, balance confidence, and losses of balance were measured daily over 28 days by self-report. Physical activity was also measured daily over this same time period via accelerometry. Dynamic factor analysis (DFA) was used to determine the dimensionality and lagged relationships across the variables. Person-specific models revealed considerable time-dependent co-variation patterns as well as pattern variation across subjects. Results also offered insight into distinct variability structures at varying levels of disability. Modeling person-level variability may be beneficial for addressing the heterogeneity of experiences in individuals with MS and for understanding temporal and dynamic interrelationships among perceived symptoms, affect, and health outcomes in this group. Copyright © 2016 Elsevier Inc. All rights reserved.
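A toy version of the kind of lagged relationship a dynamic factor analysis probes, using two synthetic daily symptom series; the 0.7 coupling, noise scale, and one-day lag are assumptions, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(8)
days = 28
base = rng.normal(size=days + 1)
fatigue = base[1:]                              # fatigue on days 1..28
# Hypothetical dynamic: today's pain partly follows yesterday's fatigue.
pain = 0.7 * base[:-1] + rng.normal(scale=0.5, size=days)

def lagged_corr(x, y, lag):
    """Correlation of y[t] with x[t - lag]."""
    if lag == 0:
        return float(np.corrcoef(x, y)[0, 1])
    return float(np.corrcoef(x[:-lag], y[lag:])[0, 1])

print(lagged_corr(fatigue, pain, 0))   # same-day association: weak by construction
print(lagged_corr(fatigue, pain, 1))   # one-day lagged association: strong
```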
Difference between manual and digital measurements of dental arches of orthodontic patients.
Jiménez-Gayosso, Sandra Isabel; Lara-Carrillo, Edith; López-González, Saraí; Medina-Solís, Carlo Eduardo; Scougall-Vilchis, Rogelio José; Hernández-Martínez, César Tadeo; Colomé-Ruiz, Gabriel Eduardo; Escoffié-Ramirez, Mauricio
2018-06-01
The objective of this study was to compare measurements performed manually with those obtained using a digital model scanner in patients with orthodontic treatment. A cross-sectional study was performed on a sample of 30 study models from patients with permanent dentition who attended a university clinic between January 2010 and December 2015. For the digital measurements, a Maestro 3D Ortho Studio scanner (Italy) was used; Mitutoyo electronic Vernier calipers (Kawasaki, Japan) were used for the manual measurements. The outcome variables were the measurements for maxillary intercanine width, mandibular intercanine width, maxillary intermolar width, mandibular intermolar width, overjet, overbite, maxillary arch perimeter, mandibular arch perimeter, and palate height. The independent variables, besides age and sex, were a series of arch characteristics. The Student t test, paired Student t test, and Pearson correlation in SPSS version 19 were used for the analysis. Of the models, 60% were from women. Two of nine measurements for pre-treatment and six of nine measurements for post-treatment showed a difference. The variables that differed between the manual and digital measurements pre-treatment were maxillary intermolar width and palate height (P < .05). Post-treatment, differences were found in mandibular intercanine width, palate height, overjet, overbite, and maxillary and mandibular arch perimeter (P < .05). The models measured manually and digitally showed certain similarities for both vertical and transverse measurements. Digital models offer the orthodontist many advantages, such as easy storage, savings in time and space, easier reproducibility of information, and the security of not deteriorating over time; their main disadvantage is cost.
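The paired comparison above can be sketched with SciPy; the width values below are synthetic, with an assumed small systematic offset in the scanner readings, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
manual = rng.normal(35.0, 2.0, 30)             # e.g. intermolar width in mm
digital = manual + rng.normal(0.4, 0.4, 30)    # scanner reads slightly wider

t_stat, p_val = stats.ttest_rel(manual, digital)   # paired Student t test
r, _ = stats.pearsonr(manual, digital)             # Pearson correlation
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}, r = {r:.3f}")
```

The paired test detects the systematic offset even though the two sets of readings remain highly correlated, which mirrors the study's pattern of "certain similarities" alongside significant differences.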
Short-term landfill methane emissions dependency on wind.
Delkash, Madjid; Zhou, Bowen; Han, Byunghyun; Chow, Fotini K; Rella, Chris W; Imhoff, Paul T
2016-09-01
Short-term (2-10 h) variations of whole-landfill methane emissions have been observed in recent field studies using the tracer dilution method for emissions measurement. To investigate the cause of these variations, the tracer dilution method is applied using 1-min emissions measurements at Sandtown Landfill (Delaware, USA) for a 2-h measurement period. An atmospheric dispersion model is developed for this field test site, which is the first application of such modeling to evaluate atmospheric effects on gas plume transport from landfills. The model is used to examine three possible causes of observed temporal emissions variability: temporal variability of surface wind speed affecting whole-landfill emissions, spatial variability of emissions due to local wind speed variations, and misaligned tracer gas release and methane emissions locations. At this site, atmospheric modeling indicates that variation in tracer dilution method emissions measurements may be caused by whole-landfill emissions variation with wind speed. Field data collected over the time period of the atmospheric model simulations corroborate this result: methane emissions are correlated with wind speed on the landfill surface with R² = 0.51 for data 2.5 m above ground, or R² = 0.55 using data 85 m above ground, with emissions increasing by up to a factor of 2 for an approximately 30% increase in wind speed. Although the atmospheric modeling and field test are conducted at a single landfill, the results suggest that wind-induced emissions may affect tracer dilution method emissions measurements at other landfills. Copyright © 2016 Elsevier Ltd. All rights reserved.
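A back-of-envelope version of the reported wind-emissions fit; the numbers are synthetic, chosen so that emissions roughly double over a ~30% wind-speed increase and the R² lands near the reported range.

```python
import numpy as np

rng = np.random.default_rng(9)
wind = rng.uniform(3.0, 4.0, 120)                          # m/s, 1-min surface wind
emissions = -1200 + 600 * wind + rng.normal(0, 200, 120)   # kg CH4/d, invented scale

slope, intercept = np.polyfit(wind, emissions, 1)
r2 = np.corrcoef(wind, emissions)[0, 1] ** 2
print("slope:", round(slope, 1), "R²:", round(r2, 2))
```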
The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model
Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.
2016-01-01
Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903
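The two violations can be combined in a small simulation of the single-mediator model X → M → Y; the path values, confounder strength, and noise scales are invented. Error in the mediator attenuates the estimated mediated effect a·b, an omitted M → Y confounder inflates it, and their combination can go either way.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
a, b = 0.5, 0.5                          # true paths; mediated effect a*b = 0.25
x = rng.normal(size=n)                   # randomized, perfectly reliable
u = rng.normal(size=n)                   # omitted confounder of the M -> Y relation
m = a * x + 0.5 * u + rng.normal(size=n)
y = b * m + 0.5 * u + rng.normal(size=n)

def ab_estimate(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]
    # b path: regress y on m and x, take the m coefficient.
    design = np.column_stack([m, x, np.ones_like(x)])
    b_hat = np.linalg.lstsq(design, y, rcond=None)[0][0]
    return a_hat * b_hat

m_noisy = m + rng.normal(size=n)         # unreliable mediator measure

ab_conf = ab_estimate(x, m, y)           # confounding only: overestimates 0.25
ab_both = ab_estimate(x, m_noisy, y)     # plus measurement error: attenuation dominates here
print(round(ab_conf, 3), round(ab_both, 3))
```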
Impact of spatial variability and sampling design on model performance
NASA Astrophysics Data System (ADS)
Schrape, Charlotte; Schneider, Anne-Kathrin; Schröder, Boris; van Schaik, Loes
2017-04-01
Many environmental physical and chemical parameters, as well as species distributions, display spatial variability at different scales. When measurements are costly in labour time or money, a choice has to be made between high sampling resolution at small scales with low spatial cover of the study area, and lower sampling resolution at small scales, which yields local data uncertainty but better spatial cover of the whole area. This dilemma is often faced in the design of field sampling campaigns for large-scale studies. When the gathered field data are subsequently used for modelling purposes, the choice of sampling design and the resulting data quality influence the model performance criteria. We studied this influence with a virtual model study based on a large dataset of field information on the spatial variation of earthworms at different scales. To this end, we built a virtual map of anecic earthworm distributions over the Weiherbach catchment (Baden-Württemberg, Germany). First, the field-scale abundance of earthworms was estimated using a catchment-scale model based on 65 field measurements. Subsequently, the high small-scale variability was added using semi-variograms, based on five fields with a total of 430 measurements in a spatially nested sampling design over these fields, to estimate the nugget, range and standard deviation of measurements within the fields. With the produced maps, we performed virtual samplings of one up to 50 random points per field. We then used these data to rebuild the catchment-scale models of anecic earthworm abundance with the same model parameters as in the work by Palm et al. (2013). The results show clearly that a large part of the unexplained deviance of the models is due to the very high small-scale variability in earthworm abundance: models based on single virtual sampling points on average obtain an explained deviance of 0.20 and a correlation coefficient of 0.64.
With increasing numbers of sampling points per field, we averaged the measured abundances within each field to obtain a more representative value of the field average. Doubling the samplings per field strongly improved the model performance criteria (explained deviance 0.38 and correlation coefficient 0.73). With 50 sampling points per field the performance criteria were 0.91 and 0.97, respectively, for explained deviance and correlation coefficient. The relationship between the number of samplings and the performance criteria can be described with a saturation curve; beyond five samples per field the model improvement becomes rather small. With this contribution we wish to discuss the impact of data variability at the sampling scale on model performance and the implications for sampling design, the assessment of model results, and ecological inferences.
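The saturating gain from averaging more samples per field can be reproduced in a toy simulation; the field means, noise level, and field count below are invented, not the Weiherbach data.

```python
import numpy as np

rng = np.random.default_rng(4)
field_truth = rng.normal(50.0, 10.0, 300)      # true field-average abundance

def corr_with_n_samples(n_samples, noise_sd=20.0):
    """Correlate true field means with the mean of n noisy point samples."""
    obs = field_truth[:, None] + rng.normal(
        0.0, noise_sd, (field_truth.size, n_samples))
    return float(np.corrcoef(field_truth, obs.mean(axis=1))[0, 1])

for n in (1, 2, 5, 50):
    print(n, round(corr_with_n_samples(n), 2))
```

Because the sampling noise variance shrinks as 1/n while the between-field variance is fixed, the correlation follows a saturation curve, with most of the gain realized by the first few samples per field.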
Health-Related Quality of Life in a Predictive Model for Mortality in Older Breast Cancer Survivors.
DuMontier, Clark; Clough-Gorr, Kerri M; Silliman, Rebecca A; Stuck, Andreas E; Moser, André
2018-03-13
To develop a predictive model and risk score for 10-year mortality using health-related quality of life (HRQOL) in a cohort of older women with early-stage breast cancer. Prospective cohort. Community. U.S. women aged 65 and older diagnosed with Stage I to IIIA primary breast cancer (N=660). We used medical variables (age, comorbidity), HRQOL measures (10-item Physical Function Index and 5-item Mental Health Index from the Medical Outcomes Study (MOS) 36-item Short-Form Survey; 8-item Modified MOS Social Support Survey), and breast cancer variables (stage, surgery, chemotherapy, endocrine therapy) to develop a 10-year mortality risk score using penalized logistic regression models. We assessed model discriminative performance using the area under the receiver operating characteristic curve (AUC), calibration performance using the Hosmer-Lemeshow test, and overall model performance using Nagelkerke R² (NR). Compared to a model including only age, comorbidity, and cancer stage and treatment variables, adding HRQOL variables improved discrimination (AUC 0.742 from 0.715) and overall performance (NR 0.221 from 0.190) with good calibration (p = 0.96, Hosmer-Lemeshow test). In a cohort of older women with early-stage breast cancer, HRQOL measures predict 10-year mortality independently of traditional breast cancer prognostic variables. These findings suggest that interventions aimed at improving physical function, mental health, and social support might improve both HRQOL and survival. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
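A hedged sketch of the modeling strategy above with scikit-learn: a penalized logistic model with and without an HRQOL-style predictor, compared by AUC. The cohort is simulated, and the variable names and coefficients are invented stand-ins for the study's predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 660
age = rng.normal(75, 6, n)
comorbid = rng.poisson(1.5, n)
phys_func = rng.normal(70, 15, n)            # HRQOL stand-in: physical function
logit = -11 + 0.15 * age + 0.4 * comorbid - 0.03 * phys_func
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X_base = np.column_stack([age, comorbid])
X_full = np.column_stack([age, comorbid, phys_func])
tr, te = train_test_split(np.arange(n), random_state=0)

aucs = {}
for name, X in [("base", X_base), ("base + HRQOL", X_full)]:
    # L2 penalty is scikit-learn's default form of penalized logistic regression.
    model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X[tr], died[tr])
    aucs[name] = roc_auc_score(died[te], model.predict_proba(X[te])[:, 1])
    print(name, round(aucs[name], 3))
```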
Roos, Paulien E.; Dingwell, Jonathan B.
2013-01-01
Falls are common in older adults. The most common cause of falls is tripping while walking. Simulation studies demonstrated that older adults may be restricted by lower limb strength and movement speed to regain balance after a trip. This review examines how modeling approaches can be used to determine how different measures predict actual fall risk and what some of the causal mechanisms of fall risk are. Although increased gait variability predicts increased fall risk experimentally, it is not clear which variability measures could best be used, or what magnitude of change corresponded with increased fall risk. With a simulation study we showed that the increase in fall risk with a certain increase in gait variability was greatly influenced by the initial level of variability. Gait variability can therefore not easily be used to predict fall risk. We therefore explored other measures that may be related to fall risk and investigated the relationship between stability measures such as Floquet multipliers and local divergence exponents and actual fall risk in a dynamic walking model. We demonstrated that short-term local divergence exponents were a good early predictor for fall risk. Neuronal noise increases with age. It has however not been fully understood if increased neuronal noise would cause an increased fall risk. With our dynamic walking model we showed that increased neuronal noise caused increased fall risk. Although people who are at increased risk of falling reduce their walking speed it had been questioned whether this slower speed would actually cause a reduced fall risk. With our model we demonstrated that a reduced walking speed caused a reduction in fall risk. This may be due to the decreased kinematic variability as a result of the reduced signal-dependent noise of the smaller muscle forces that are required for slower walking.
These insights may be used in the development of fall prevention programs in order to better identify those at increased risk of falling and to target those factors that influence fall risk most. PMID:24120280
Variable Stiffness Panel Structural Analyses With Material Nonlinearity and Correlation With Tests
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Gurdal, Zafer
2006-01-01
Results from structural analyses of three tow-placed AS4/977-3 composite panels with both geometric and material nonlinearities are presented. Two of the panels have variable stiffness layups where the fiber orientation angle varies as a continuous function of location on the panel planform. One variable stiffness panel has overlapping tow bands of varying thickness, while the other has a theoretically uniform thickness. The third panel has a conventional uniform-thickness [±45]_5s layup with straight fibers, providing a baseline for comparing the performance of the variable stiffness panels. Parametric finite element analyses including nonlinear material shear are first compared with material characterization test results for two orthotropic layups. This nonlinear material model is incorporated into structural analysis models of the variable stiffness and baseline panels with applied end shortenings. Measured geometric imperfections and mechanical prestresses, generated by forcing the variable stiffness panels from their cured anticlastic shapes into their flatter test configurations, are also modeled. Results of these structural analyses are then compared to the measured panel structural response. Good correlation is observed between the analysis results and displacement test data throughout deep postbuckling up to global failure, suggesting that nonlinear material behavior is an important component of the actual panel structural response.
Ancilla-driven quantum computation for qudits and continuous variables
NASA Astrophysics Data System (ADS)
Proctor, Timothy; Giulian, Melissa; Korolkova, Natalia; Andersson, Erika; Kendon, Viv
2017-05-01
Although qubits are the leading candidate for the basic elements in a quantum computer, there is also a range of reasons to consider using higher-dimensional qudits or quantum continuous variables (QCVs). In this paper, we use a general "quantum variable" formalism to propose a method of quantum computation in which ancillas are used to mediate gates on a well-isolated "quantum memory" register and which may be applied to the setting of qubits, qudits (for d > 2), or QCVs. More specifically, we present a model in which universal quantum computation may be implemented on a register using only repeated applications of a single fixed two-body ancilla-register interaction gate, ancillas prepared in a single state, and local measurements of these ancillas. In order to maintain determinism in the computation, adaptive measurements via a classical feed-forward of measurement outcomes are used, with the method similar to that in measurement-based quantum computation (MBQC). We show that our model has the same hybrid quantum-classical processing advantages as MBQC, including the power to implement any Clifford circuit in essentially one layer of quantum computation. In some physical settings, high-quality measurements of the ancillas may be highly challenging or not possible, and hence we also present a globally unitary model which replaces the need for measurements of the ancillas with the requirement for ancillas to be prepared in states from a fixed orthonormal basis. Finally, we discuss settings in which these models may be of practical interest.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
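One step of the backward-elimination comparison discussed above can be sketched with scikit-learn on simulated data: a few informative predictors buried among many noise variables, a full model, and a reduced model refit on the most important predictors. The predictor count, cutoff of five variables, and forest size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
n, p = 1000, 50
X = rng.normal(size=(n, p))                  # mostly noise "landscape" predictors
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(size=n)) > 0   # 3 informative vars

full = RandomForestClassifier(
    n_estimators=300, oob_score=True, random_state=0).fit(X, y)

# One step of a backward-elimination loop: keep the top predictors by
# impurity importance and refit on the reduced variable set.
top = np.argsort(full.feature_importances_)[::-1][:5]
reduced = RandomForestClassifier(
    n_estimators=300, oob_score=True, random_state=0).fit(X[:, top], y)

print("full OOB accuracy:   ", round(full.oob_score_, 3))
print("reduced OOB accuracy:", round(reduced.oob_score_, 3))
```

Note that the out-of-bag score of the reduced model is computed on data already used to rank the variables, which is exactly the kind of optimistic bias the paper warns about; an honest comparison keeps the selection step inside an external cross-validation loop.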
NASA Technical Reports Server (NTRS)
Alag, Gurbux S.; Gilyard, Glenn B.
1990-01-01
To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
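The bias-augmentation idea can be illustrated with a scalar toy model: the state vector is extended with a constant sensor bias so a Kalman filter estimates both together. The dynamics, noise levels, and excitation input below are invented, not an engine model; separating state from bias relies on the two having different dynamics.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 800
b_true = 0.5                                # constant off-nominal sensor bias
s = 0.0                                     # true (unmeasured) state
F = np.array([[0.95, 0.0], [0.0, 1.0]])     # state decays, bias is constant
H = np.array([[1.0, 1.0]])                  # sensor sees state + bias
Q = np.diag([1e-4, 1e-8])                   # process noise (bias nearly constant)
R = 0.1 ** 2                                # measurement noise variance

x = np.zeros(2)                             # estimate of [state, bias]
P = np.eye(2) * 10.0
for k in range(T):
    u = np.sin(0.05 * k)                    # known excitation input
    s = 0.95 * s + u                        # propagate true state
    zk = s + b_true + rng.normal(0.0, 0.1)  # biased, noisy measurement
    # Predict.
    x = F @ x + np.array([u, 0.0])
    P = F @ P @ F.T + Q
    # Update.
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    x = x + K * (zk - (H @ x).item())
    P = (np.eye(2) - np.outer(K, H)) @ P

print("estimated [state, bias]:", x.round(2), "true bias:", b_true)
```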
Mountain Hydrology of the Semi-Arid Western U.S.: Research Needs, Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Bales, R.; Dozier, J.; Molotch, N.; Painter, T.; Rice, R.
2004-12-01
In the semi-arid Western U.S., water resources are being stressed by the combination of climate warming, changing land use, and population growth. Multiple consensus planning documents point to this region as perhaps the highest priority for new hydrologic understanding. Three main hydrologic issues illustrate research needs in the snow-driven hydrology of the region. First, despite the hydrologic importance of mountainous regions, the processes controlling their energy, water and biogeochemical fluxes are not well understood. Second, there exists a need to understand, at various spatial and temporal scales, the feedbacks between hydrological fluxes and biogeochemical and ecological processes. Third, the paucity of adequate observation networks in mountainous regions hampers improvements in understanding these processes. For example, we lack an adequate description of the factors controlling the partitioning of snowmelt into runoff versus infiltration and evapotranspiration, and need strategies to accurately measure the variability of precipitation, snow cover and soil moisture. The amount of mountain-block and mountain-front recharge and how recharge patterns respond to climate variability are poorly known across the mountainous West. Moreover, hydrologic modelers and those measuring important hydrologic variables from remote sensing and distributed in situ sites have failed to bridge rifts between modeling needs and available measurements. Research and operational communities will benefit from data fusion/integration, improved measurement arrays, and rapid data access. For example, the hydrologic modeling community would advance if given access to a single source, rather than disparate sources, of bundles of cutting-edge remote sensing retrievals of snow-covered area and albedo, in situ measurements of snow water equivalent and precipitation, and spatio-temporal fields of variables that drive models.
In addition, opportunities exist for the deployment of new technologies, taking advantage of research in spatially distributed sensor networks that can enhance data recovery and analysis.
Entropy of stable seasonal rainfall distribution in Kelantan
NASA Astrophysics Data System (ADS)
Azman, Muhammad Az-zuhri; Zakaria, Roslinazairimah; Satari, Siti Zanariah; Radi, Noor Fadhilah Ahmad
2017-05-01
Investigating rainfall variability is vital for planning and management in many fields related to water resources. Climate change can affect water availability and may aggravate water scarcity in the future. Two statistical measures that have been used by many researchers to quantify rainfall variability are the variance and the coefficient of variation. However, these two measures are insufficient, since the rainfall distribution in Malaysia, especially on the East Coast of Peninsular Malaysia, is not symmetric but positively skewed. In this study, the entropy concept is used as a tool to measure the seasonal rainfall variability in Kelantan, and ten rainfall stations were selected. In previous studies, the entropy of stable rainfall (ESR) and the apportionment entropy (AE) were used to describe the variability of rainfall amounts across years for Australian rainfall data. In this study, the entropy of stable seasonal rainfall (ESSR) is suggested to model the variability of rainfall amounts during the northeast monsoon (NEM) and southwest monsoon (SWM) seasons in Kelantan. The ESSR is defined to measure the long-term average variability of seasonal rainfall amounts within a given year (1960-2012). On the other hand, the AE measures the variability of rainfall amounts across the months. The ESSR and AE values show that stations along the east coastline are more variable than inland stations for Kelantan rainfall. Contour maps of ESSR for the Kelantan rainfall stations are also presented.
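The apportionment entropy (AE) mentioned above has a direct implementation: the Shannon entropy of each month's share of the annual rainfall total, which is maximized at log2(12) bits when rainfall is spread evenly across the twelve months. The monthly figures below are invented for illustration, not Kelantan station data.

```python
import numpy as np

def apportionment_entropy(monthly_mm):
    """Shannon entropy (bits) of each month's share of the annual total."""
    p = np.asarray(monthly_mm, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                               # convention: 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

uniform_year = [100] * 12                      # rain spread evenly: AE = log2(12)
monsoonal_year = [400, 350, 60, 20, 10, 5, 5, 10, 20, 60, 150, 300]

print(round(apportionment_entropy(uniform_year), 3))    # 3.585
print(round(apportionment_entropy(monsoonal_year), 3))  # lower: rain is concentrated
```

A station whose rainfall is concentrated in a monsoon season thus scores a lower AE than one with evenly apportioned rainfall, which is how the entropy measures capture skewed variability that variance alone does not.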
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
Least Principal Components Analysis (LPCA): An Alternative to Regression Analysis.
ERIC Educational Resources Information Center
Olson, Jeffery E.
Often, all of the variables in a model are latent, random, or subject to measurement error, or there is not an obvious dependent variable. When any of these conditions exist, an appropriate method for estimating the linear relationships among the variables is Least Principal Components Analysis. Least Principal Components are robust, consistent,…
Three Cs in measurement models: causal indicators, composite indicators, and covariates.
Bollen, Kenneth A; Bauldry, Shawn
2011-09-01
In the last 2 decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that one can classify indicators into 2 categories: effect (reflective) indicators and causal (formative) indicators. We argue that the dichotomous view is too simple. Instead, there are effect indicators and 3 types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the "Three Cs"). Causal indicators have conceptual unity, and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variables. Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects, and composites are a matter of convenience. The failure to distinguish the Three Cs has led to confusion and questions, such as, Are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points.
Gao, Nuo; Zhu, S A; He, Bin
2005-06-07
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
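The estimation strategy in this abstract, minimizing the misfit between "measured" and model-predicted magnetic flux densities with the simplex method, can be illustrated with a toy example. The linear forward map `A` below is a hypothetical stand-in for the real numerical head-model solver, and the layer conductivity values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Toy stand-in for the MREIT forward problem: a linear map from the three
# layer conductivities (brain, skull, scalp) to a vector of flux-density
# measurements. The real forward model is a numerical head-model solver.
A = rng.normal(size=(50, 3))
true_sigma = np.array([0.33, 0.022, 0.33])        # S/m, illustrative values
b_measured = A @ true_sigma + 0.001 * rng.normal(size=50)

def misfit(sigma):
    """Sum of squared errors between 'measured' and model-predicted data."""
    return float(np.sum((A @ sigma - b_measured) ** 2))

# Simplex (Nelder-Mead) search over the three conductivity variables.
result = minimize(misfit, x0=np.array([0.2, 0.05, 0.2]), method="Nelder-Mead")
sigma_hat = result.x
```

With low measurement noise the recovered conductivities land close to the true values, mirroring the small relative errors reported in the simulations.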
On the modelling of scalar and mass transport in combustor flows
NASA Technical Reports Server (NTRS)
Nikjooy, M.; So, R. M. C.
1989-01-01
Results are presented of a numerical study of swirling and nonswirling combustor flows with and without density variations. Constant-density arguments are used to justify closure assumptions invoked for the transport equations for turbulent momentum and scalar fluxes, which are written in terms of density-weighted variables. Comparisons are carried out with measurements obtained from three different axisymmetric model combustor experiments covering recirculating flow, swirling flow, and variable-density swirling flow inside the model combustors. Results show that the Reynolds stress/flux models do a credible job of predicting constant-density swirling and nonswirling combustor flows with passive scalar transport. However, their improvements over algebraic stress/flux models are marginal. The extension of the constant-density models to variable-density flow calculations shows that the models are equally valid for such flows.
Assessing medication effects in the MTA study using neuropsychological outcomes.
Epstein, Jeffery N; Conners, C Keith; Hervey, Aaron S; Tonev, Simon T; Arnold, L Eugene; Abikoff, Howard B; Elliott, Glen; Greenhill, Laurence L; Hechtman, Lily; Hoagwood, Kimberly; Hinshaw, Stephen P; Hoza, Betsy; Jensen, Peter S; March, John S; Newcorn, Jeffrey H; Pelham, William E; Severe, Joanne B; Swanson, James M; Wells, Karen; Vitiello, Benedetto; Wigal, Timothy
2006-05-01
While studies have increasingly investigated deficits in reaction time (RT) and RT variability in children with attention deficit/hyperactivity disorder (ADHD), few studies have examined the effects of stimulant medication on these important neuropsychological outcome measures. 316 children who participated in the Multimodal Treatment Study of Children with ADHD (MTA) completed the Conners' Continuous Performance Test (CPT) at the 24-month assessment point. Outcome measures included standard CPT outcomes (e.g., errors of commission, mean hit reaction time (RT)) and RT indicators derived from an Ex-Gaussian distributional model (i.e., mu, sigma, and tau). Analyses revealed significant effects of medication across all neuropsychological outcome measures. Results on the Ex-Gaussian outcome measures revealed that stimulant medication slows RT and reduces RT variability. This demonstrates the importance of including analytic strategies that can accurately model the actual distributional pattern, including the positive skew. Further, the results of the study relate to several theoretical models of ADHD.
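The Ex-Gaussian RT model (mu, sigma, tau) referred to above can be fitted in Python via `scipy.stats.exponnorm`; the simulated reaction times and parameter values below are illustrative, not MTA data.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)

# Simulate reaction times (in seconds) from an Ex-Gaussian distribution:
# a Gaussian component (mu, sigma) convolved with an exponential tail (tau),
# which produces the positive skew typical of RT data.
mu, sigma, tau = 0.40, 0.05, 0.15
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy parameterizes the Ex-Gaussian as exponnorm(K, loc, scale) with
# K = tau / sigma, loc = mu, scale = sigma. Rough starting guesses help
# the maximum-likelihood fit converge.
K, loc, scale = exponnorm.fit(rts, 2.0, loc=rts.min(), scale=rts.std() / 2)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale
```

Fitting mu, sigma, and tau separately is what lets an analysis distinguish an overall slowing of responses (mu) from increased variability in the skewed tail (tau).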
Angular position of the cleat according to torsional parameters of the cyclist's lower limb.
Ramos-Ortega, Javier; Domínguez, Gabriel; Castillo, José Manuel; Fernández-Seguín, Lourdes; Munuera, Pedro V
2014-05-01
The aim of this work was to study the relationship of torsional and rotational parameters of the lower limb with a specific angular position of the cleat, to establish whether these variables affect the adjustment of the cleat. Correlational study. Motion analysis laboratory. Thirty-seven high-performance male cyclists. The variables studied in the cyclist's lower limb were hip rotation (internal and external), tibial torsion angle, Q angle, and forefoot adductus angle. The cleat angle was measured on a photograph of the sole and on a radiograph (Rx) of it using the software AutoCAD 2008. The variables were the photograph angle (photograph), a variable denominated the cleat-tarsus minor angle, and a variable denominated the cleat-second metatarsal angle (Rx). The analysis included the intraclass correlation coefficient for the reliability of the measurements and Student's t test on the dependent variables to compare sides; the multiple linear regression models were calculated using the software SPSS 15.0 for Windows. Student's t test on the dependent variables showed no significant differences between sides (P = 0.209 for the photograph angle, P = 0.735 for the cleat-tarsus minor angle, and P = 0.801 for the cleat-second metatarsal angle). Values of R and R2 were 0.303 and 0.092 for the photograph angle model (P = 0.08), 0.683 and 0.466 for the cleat-tarsus minor angle model (P < 0.001), and 0.618 and 0.382 for the cleat-second metatarsal angle model (P < 0.001). The equation given by the model was cleat-tarsus minor angle = 75.094 - (0.521 × forefoot adductus angle) + (0.116 × outward rotation of the hips) + (0.220 × Q angle).
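The reported regression equation can be written directly as a function; the example input angles below are hypothetical, not values from the study.

```python
def cleat_tarsus_minor_angle(forefoot_adductus, hip_external_rotation, q_angle):
    """Regression model reported in the abstract (all angles in degrees):
    cleat-tarsus minor angle = 75.094 - 0.521*forefoot adductus
                             + 0.116*outward hip rotation + 0.220*Q angle.
    """
    return (75.094
            - 0.521 * forefoot_adductus
            + 0.116 * hip_external_rotation
            + 0.220 * q_angle)

# Hypothetical cyclist: 10 deg forefoot adductus, 35 deg external hip
# rotation, 15 deg Q angle.
angle = cleat_tarsus_minor_angle(10.0, 35.0, 15.0)  # 77.244 degrees
```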
2011-01-01
Efforts for drug-free sport include developing a better understanding of the behavioural determinants that underlie doping, with an increased interest in developing anti-doping prevention and intervention programmes. Empirical testing of both is dominated by self-report questionnaires, which are the most widely used method in psychological assessments and sociology polls. Disturbingly, the potential distorting effect of socially desirable responding (SD) is seldom considered in doping research, or is dismissed based on a weak correlation between some SD measure and the variables of interest. The aim of this report is to draw attention to i) the potential distorting effect of SD and ii) the limitation of using correlation analysis between an SD measure and the individual measures. Models of doping opinion, as a potentially contentious issue, were tested using the structural equation modeling (SEM) technique with and without the SD variable, on a dataset of 278 athletes, assessing the SD effect at i) the indicator and ii) the construct levels, as well as iii) testing SD as an independent variable affecting expressed doping opinion. Participants were categorised by their SD score into high- and low-SD groups. Based on the low correlation coefficients (<|0.22|) observed in the overall sample, the SD effect on the indicator variables could be disregarded. Regression weights between predictors and the outcome variable varied between the groups with high and low SD, but despite the practically non-existent relationship between SD and the predictors (<|0.11|) in the low-SD group, both groups showed improved model fit with SD, independently. The results of this study clearly demonstrate the presence of an SD effect and the inadequacy of the commonly used pairwise correlation to assess social desirability at the model level. In the absence of direct observation of the target behaviour (i.e. doping use), evaluation of the effectiveness of future anti-doping campaigns, along with empirical testing of refined doping behavioural models, will likely continue to rely on self-reported information. Over and above controlling for the effect of socially desirable responding in research that makes inferences based on self-reported information on social cognitive and behavioural measures, it is recommended that the SD effect is appropriately assessed during data analysis. PMID:21244663
Jiang, Hui; Liu, Guohai; Mei, Congli; Yu, Shuang; Xiao, Xiahong; Ding, Yuhan
2012-11-01
The feasibility of rapid determination of the process variables (i.e. pH and moisture content) in solid-state fermentation (SSF) of wheat straw using Fourier transform near infrared (FT-NIR) spectroscopy was studied. The synergy interval partial least squares (siPLS) algorithm was implemented to calibrate the regression model. The number of PLS factors and the number of subintervals were optimized simultaneously by cross-validation. The performance of the prediction model was evaluated according to the root mean square error of cross-validation (RMSECV), the root mean square error of prediction (RMSEP) and the correlation coefficient (R). The measurement results of the optimal model were obtained as follows: RMSECV=0.0776, R(c)=0.9777, RMSEP=0.0963, and R(p)=0.9686 for the pH model; RMSECV=1.3544% w/w, R(c)=0.8871, RMSEP=1.4946% w/w, and R(p)=0.8684 for the moisture content model. Finally, compared with classic PLS and iPLS models, the siPLS model revealed its superior performance. The overall results demonstrate that FT-NIR spectroscopy combined with the siPLS algorithm can be used to measure process variables in solid-state fermentation of wheat straw, and that NIR spectroscopy has the potential to be utilized in the SSF industry. Copyright © 2012 Elsevier B.V. All rights reserved.
Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor
2011-05-14
In epidemiological studies explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application, assuming different degrees of measurement error variability, was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed.
In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.
Apparatus and method for controlling autotroph cultivation
Fuxman, Adrian M; Tixier, Sebastien; Stewart, Gregory E; Haran, Frank M; Backstrom, Johan U; Gerbrandt, Kelsey
2013-07-02
A method includes receiving at least one measurement of a dissolved carbon dioxide concentration of a mixture of fluid containing an autotrophic organism. The method also includes determining an adjustment to one or more manipulated variables using the at least one measurement. The method further includes generating one or more signals to modify the one or more manipulated variables based on the determined adjustment. The one or more manipulated variables could include a carbon dioxide flow rate, an air flow rate, a water temperature, and an agitation level for the mixture. At least one model relates the dissolved carbon dioxide concentration to one or more manipulated variables, and the adjustment could be determined by using the at least one model to drive the dissolved carbon dioxide concentration to at least one target that optimizes a goal function. The goal function could be to optimize biomass growth rate, nutrient removal and/or lipid production.
Molas, Marek; Lesaffre, Emmanuel
2008-12-30
Discrete bounded outcome scores (BOS), i.e. discrete measurements that are restricted on a finite interval, often occur in practice. Examples are compliance measures, quality of life measures, etc. In this paper we examine three related random effects approaches to analyze longitudinal studies with a BOS as response: (1) a linear mixed effects (LM) model applied to a logistic transformed modified BOS; (2) a model assuming that the discrete BOS is a coarsened version of a latent random variable, which after a logistic-normal transformation, satisfies an LM model; and (3) a random effects probit model. We consider also the extension whereby the variability of the BOS is allowed to depend on covariates. The methods are contrasted using a simulation study and on a longitudinal project, which documents stroke rehabilitation in four European countries using measures of motor and functional recovery. Copyright 2008 John Wiley & Sons, Ltd.
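Approach (1) above transforms a modified BOS to the logit scale before fitting a linear mixed model. A minimal sketch of such a transform is shown below; the (k + 0.5)/(K + 1) boundary adjustment is one common choice and is not necessarily the one used in the paper.

```python
import math

def logit_modified_bos(score, max_score):
    """Logistic transform of a discrete bounded outcome score.

    The score k in {0, ..., K} is first shrunk away from the boundaries,
    p = (k + 0.5) / (K + 1), so that logit(p) is finite even at the
    extreme scores 0 and K. The transformed value can then serve as the
    response of a linear mixed effects model.
    """
    p = (score + 0.5) / (max_score + 1)
    return math.log(p / (1 - p))
```

The adjustment keeps the transform symmetric: a score of K/2 maps to 0, and scores k and K-k map to values of equal magnitude and opposite sign.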
CORRECTING FOR MEASUREMENT ERROR IN LATENT VARIABLES USED AS PREDICTORS*
Schofield, Lynne Steuerle
2015-01-01
This paper represents a methodological-substantive synergy. A new model, the Mixed Effects Structural Equations (MESE) model which combines structural equations modeling and item response theory is introduced to attend to measurement error bias when using several latent variables as predictors in generalized linear models. The paper investigates racial and gender disparities in STEM retention in higher education. Using the MESE model with 1997 National Longitudinal Survey of Youth data, I find prior mathematics proficiency and personality have been previously underestimated in the STEM retention literature. Pre-college mathematics proficiency and personality explain large portions of the racial and gender gaps. The findings have implications for those who design interventions aimed at increasing the rates of STEM persistence among women and under-represented minorities. PMID:26977218
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
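One way to operationalize "ample coverage of feature-space variability" is a greedy maximin design over the LiDAR-derived plot features. The two-dimensional candidate features below are synthetic, and this is only one of several selection strategies of the kind the study compares.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical LiDAR feature space: 200 candidate plots described by two
# standardized point-cloud metrics (e.g. mean height and canopy density).
candidates = rng.uniform(0, 1, size=(200, 2))

def maximin_selection(features, n_plots):
    """Greedy maximin design: repeatedly add the candidate farthest (in
    feature space) from the plots already chosen, so the selected field
    plots cover the variability of the LiDAR features."""
    # Start from the candidate nearest the feature-space mean.
    start = int(np.argmin(np.linalg.norm(features - features.mean(axis=0),
                                         axis=1)))
    chosen = [start]
    for _ in range(n_plots - 1):
        # Distance from every candidate to its nearest already-chosen plot.
        d = np.min(np.linalg.norm(features[:, None, :]
                                  - features[chosen][None, :, :], axis=2),
                   axis=1)
        chosen.append(int(np.argmax(d)))
    return chosen

plots = maximin_selection(candidates, 12)
```

Because already-chosen plots have zero distance to themselves, they are never re-selected, and the resulting set spreads out far more evenly than a random subset of the same size.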
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed the state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from the Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Krichevsky, Vladimir; Gebo, Norman
1992-01-01
Five years of rain rate and modeled slant path attenuation distributions at 20 GHz and 30 GHz, derived from a network of 10 tipping-bucket rain gages, were examined. The rain gage network is located within a grid 70 km north-south and 47 km east-west in the Mid-Atlantic coast of the United States in the vicinity of Wallops Island, Virginia. Distributions were derived from the variable integration time data and from one-minute averages. It was demonstrated that for realistic fade margins, the variable integration time results are adequate to estimate slant path attenuations at frequencies above 20 GHz using models which require one-minute averages. An accurate empirical formula was developed to convert the variable integration time rain rates to one-minute averages. Fade distributions at 20 GHz and 30 GHz were derived employing Crane's Global model because it was demonstrated to exhibit excellent accuracy against measured COMSTAR fades at 28.56 GHz.
Empirical algorithms to predict aragonite saturation state
NASA Astrophysics Data System (ADS)
Turk, Daniela; Dowd, Michael
2017-04-01
Novel sensor packages deployed on autonomous platforms (Profiling Floats, Gliders, Moorings, SeaCycler) and biogeochemical models have the potential to increase the coverage of a key water chemistry variable, the aragonite saturation state (ΩAr), in time and space, in particular in the undersampled regions of the global ocean. However, these do not provide the set of inorganic carbon measurements commonly used to derive ΩAr. There is therefore a need to develop regional predictive models that determine ΩAr from measurements of commonly observed and/or non-carbonate oceanic variables. Here, we investigate the predictive skill of several commonly observed oceanographic variables (temperature, salinity, oxygen, nitrate, phosphate and silicate) in determining ΩAr, using climatology and shipboard data. This will allow us to assess the potential for autonomous sensors and biogeochemical models to monitor ΩAr regionally and globally. We apply the regression models to several time series data sets and discuss regional differences and their implications for global estimates of ΩAr.
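A regional predictive model of the kind described can be as simple as ordinary least squares on the observed variables. The data and coefficients below are synthetic placeholders, not climatology values; they only illustrate the regression setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for climatology data: temperature (deg C), salinity,
# oxygen (umol/kg), nitrate (umol/kg). Coefficients are illustrative only.
n = 500
T = rng.uniform(2, 28, n)
S = rng.uniform(33, 36, n)
O2 = rng.uniform(150, 300, n)
NO3 = rng.uniform(0, 30, n)
omega_ar = (0.08 * T + 0.3 * (S - 34) + 0.004 * (O2 - 200)
            - 0.03 * NO3 + 1.5 + 0.05 * rng.normal(size=n))

# Ordinary least squares: Omega_Ar ~ intercept + T + S + O2 + NO3.
X = np.column_stack([np.ones(n), T, S, O2, NO3])
beta, *_ = np.linalg.lstsq(X, omega_ar, rcond=None)
pred = X @ beta
rmse = float(np.sqrt(np.mean((omega_ar - pred) ** 2)))
```

Once calibrated against shipboard carbonate chemistry, such a regression can be evaluated on sensor streams that measure only the predictor variables.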
A Comparison between Multiple Regression Models and CUN-BAE Equation to Predict Body Fat in Adults
Fuster-Parra, Pilar; Bennasar-Veny, Miquel; Tauler, Pedro; Yañez, Aina; López-González, Angel A.; Aguiló, Antoni
2015-01-01
Background: Because the accurate measurement of body fat (BF) is difficult, several prediction equations have been proposed. The aim of this study was to compare different multiple regression models to predict BF, including the recently reported CUN-BAE equation. Methods: Multiple regression models using body mass index (BMI) and body adiposity index (BAI) as predictors of BF were compared. These models were also compared with the CUN-BAE equation. For all the analyses, a sample including all the participants and another including only the overweight and obese subjects were considered. The BF reference measure was made using bioelectrical impedance analysis. Results: The simplest models, including only BMI or BAI as independent variables, showed that BAI is a better predictor of BF. However, adding the variable sex to both models made BMI a better predictor than BAI. For both the whole group of participants and the group of overweight and obese participants, using simple models (BMI, age and sex as variables) allowed obtaining similar correlations with BF as when the more complex CUN-BAE was used (ρ = 0.87 vs. ρ = 0.86 for the whole sample and ρ = 0.88 vs. ρ = 0.89 for overweight and obese subjects, the second value in each pair being the one for CUN-BAE). Conclusions: There are models simpler than the CUN-BAE equation that fit BF as well as CUN-BAE does. Therefore, it could be considered that CUN-BAE overfits. Using a simple linear regression model, BAI, as the only variable, predicts BF better than BMI. However, when the sex variable is introduced, BMI becomes the indicator of choice to predict BF. PMID:25821960
Evaluation of response variables in computer-simulated virtual cataract surgery
NASA Astrophysics Data System (ADS)
Söderberg, Per G.; Laurell, Carl-Gustaf; Simawi, Wamidh; Nordqvist, Per; Skarman, Eva; Nordh, Leif
2006-02-01
We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at evaluating the precision in the estimation of response variables identified for measurement of the performance of VR phaco surgery. We identified 31 response variables measuring: the overall procedure, the foot pedal technique, the phacoemulsification technique, erroneous manipulation, and damage to ocular structures. In total, 8 medical or optometry students with a good knowledge of ocular anatomy and physiology but naive to cataract surgery performed three sessions each of VR phaco surgery. For measurement, the surgical procedure was divided into a sculpting phase and an evacuation phase. The 31 response variables were measured for each phase in all three sessions. The variance components for individuals, and for iterations of sessions within individuals, were estimated with an analysis of variance assuming a hierarchical model. The consequences of the estimated variabilities for sample size requirements were determined. It was found that there was generally more variability in iterated sessions within individuals for measurements of the sculpting phase than for measurements of the evacuation phase. This resulted in larger required sample sizes for detecting differences between independent groups, or change within a group, for the sculpting phase as compared to the evacuation phase. It is concluded that several of the identified response variables can be measured with sufficient precision for the evaluation of VR phaco surgery.
The effect of particle size distribution on the design of urban stormwater control measures
Selbig, William R.; Fienen, Michael N.; Horwatich, Judy A.; Bannerman, Roger T.
2016-01-01
An urban pollutant loading model was used to demonstrate how incorrect assumptions on the particle size distribution (PSD) in urban runoff can alter the design characteristics of stormwater control measures (SCMs) used to remove solids in stormwater. Field-measured PSD, although highly variable, is generally coarser than the widely-accepted PSD characterized by the Nationwide Urban Runoff Program (NURP). PSDs can be predicted based on environmental surrogate data. There were no appreciable differences in predicted PSD when grouped by season. Model simulations of a wet detention pond and catch basin showed a much smaller surface area is needed to achieve the same level of solids removal using the median value of field-measured PSD as compared to NURP PSD. Therefore, SCMs that used the NURP PSD in the design process could be unnecessarily oversized. The median of measured PSDs, although more site-specific than NURP PSDs, could still misrepresent the efficiency of an SCM because it may not adequately capture the variability of individual runoff events. Future pollutant loading models may account for this variability through regression with environmental surrogates, but until then, without proper site characterization, the adoption of a single PSD to represent all runoff conditions may result in SCMs that are under- or over-sized, rendering them ineffective or unnecessarily costly.
ERIC Educational Resources Information Center
Ruban, Lilia; McCoach, D. Betsy; Reis, Sally M.
The aim of this study was to report the development and testing of a model explaining gender differences in the interrelationships among aptitude measures, high school mathematics and science preparation, college academic level, motivational and self-regulatory variables, and grade point average using structural equation modeling and multiple…
ERIC Educational Resources Information Center
Rupp, Andre A.
2012-01-01
In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled: (1) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables; and (2) 2 short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale…
Inventory implications of using sampling variances in estimation of growth model coefficients
Albert R. Stage; William R. Wykoff
2000-01-01
Variables based on stand densities or stocking have sampling errors that depend on the relation of tree size to plot size and on the spatial structure of the population. Ignoring the sampling errors of such variables, which include most measures of competition used in both distance-dependent and distance-independent growth models, can bias the predictions obtained from...
ERIC Educational Resources Information Center
Svanum, Soren; Bringle, Robert G.
1980-01-01
The confluence model of cognitive development was tested on 7,060 children. Family size, sibling order within family sizes, and hypothesized age-dependent effects were tested. Findings indicated an inverse relationship between family size and the cognitive measures; age-dependent effects and other confluence variables were found to be…
Project Rulison gas flow analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montan, D.N.
1971-01-01
An analysis of the well performance was attempted by fitting a simple model of the chimney, gas sands, and explosively created fracturing to the two experimentally measured variables, flow rate and chimney pressure. The gas-flow calculations for various trial models were done by a finite difference solution to the nonlinear partial differential equation for radial Darcy flow. The TRUMP computer program was used to perform the numerical calculations. In principle, either the flow rate or the chimney pressure could be used as the independent variable in the calculations. In the present case, the flow rate was used as the independent variable, since chimney pressure measurements were not made until after the second flow period in early Nov. 1970. Furthermore, the formation pressure was not accurately known and, hence, was considered a variable parameter in the modeling process. The chimney pressure was assumed equal to the formation pressure at the beginning of the flow testing. The model consisted of a central zone, representing the chimney, surrounded by a number of concentric zones, representing the formation. The effect of explosive fracturing was simulated by increasing the permeability in the zones near the central zone.
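For orientation, the nonlinear equation referred to here is usually written, for isothermal radial Darcy flow of gas, in pressure-squared variables; the report's exact formulation is not given, so this is the standard textbook form with generic symbols:

```latex
\frac{1}{r}\,\frac{\partial}{\partial r}\!\left( r\,\frac{\partial p^{2}}{\partial r} \right)
  = \frac{\phi\,\mu\,c}{k}\,\frac{\partial p^{2}}{\partial t}
```

where $\phi$ is porosity, $\mu$ gas viscosity, $c$ total compressibility, and $k$ permeability; the pressure dependence of $\mu$ and $c$ is what makes the equation nonlinear.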
A model for helicopter guidance on spiral trajectories
NASA Technical Reports Server (NTRS)
Mendenhall, S.; Slater, G. L.
1980-01-01
A point mass model is developed for helicopter guidance on spiral trajectories. A fully coupled set of state equations is developed, and perturbation equations suitable for 3-D and 4-D guidance are derived and shown to be amenable to conventional state variable feedback methods. Control variables are chosen to be the magnitude and orientation of the net rotor thrust. Using these variables, reference controls for nonlevel accelerating trajectories are easily determined. The effects of constant wind are shown to require significant feedforward correction to some of the reference controls and to the time. Although not easily measured themselves, the control variables chosen are shown to be easily related to the physical variables available in the cockpit.
On the use of internal state variables in thermoviscoplastic constitutive equations
NASA Technical Reports Server (NTRS)
Allen, D. H.; Beek, J. M.
1985-01-01
The general theory of internal state variables is reviewed in order to apply it to inelastic metals in use in high-temperature environments. In this process, certain constraints and clarifications are made regarding internal state variables. It is shown that the Helmholtz free energy can be utilized to construct constitutive equations which are appropriate for metallic superalloys. Internal state variables are shown to represent locally averaged measures of dislocation arrangement, dislocation density, and intergranular fracture. The internal state variable model is demonstrated to be a suitable framework for comparison of several currently proposed models for metals and can therefore be used to exhibit history dependence, nonlinearity, and rate as well as temperature sensitivity.
Three Cs in Measurement Models: Causal Indicators, Composite Indicators, and Covariates
Bollen, Kenneth A.; Bauldry, Shawn
2013-01-01
In the last two decades attention to causal (and formative) indicators has grown. Accompanying this growth has been the belief that we can classify indicators into two categories, effect (reflective) indicators and causal (formative) indicators. This paper argues that the dichotomous view is too simple. Instead, there are effect indicators and three types of variables on which a latent variable depends: causal indicators, composite (formative) indicators, and covariates (the “three Cs”). Causal indicators have conceptual unity and their effects on latent variables are structural. Covariates are not concept measures, but are variables to control to avoid bias in estimating the relations between measures and latent variable(s). Composite (formative) indicators form exact linear combinations of variables that need not share a concept. Their coefficients are weights rather than structural effects and composites are a matter of convenience. The failure to distinguish the “three Cs” has led to confusion and questions such as: are causal and formative indicators different names for the same indicator type? Should an equation with causal or formative indicators have an error term? Are the coefficients of causal indicators less stable than effect indicators? Distinguishing between causal and composite indicators and covariates goes a long way toward eliminating this confusion. We emphasize the key role that subject matter expertise plays in making these distinctions. We provide new guidelines for working with these variable types, including identification of models, scaling latent variables, parameter estimation, and validity assessment. A running empirical example on self-perceived health illustrates our major points. PMID:21767021
p-adic stochastic hidden variable model
NASA Astrophysics Data System (ADS)
Khrennikov, Andrew
1998-03-01
We propose a stochastic hidden variable model in which the hidden variables have a p-adic probability distribution ρ(λ) while the conditional probability distributions P(U,λ), U=A,A',B,B', are ordinary probabilities defined on the basis of the Kolmogorov measure-theoretic axiomatics. The frequency definition of p-adic probability is quite similar to the ordinary frequency definition of probability: p-adic frequency probability is defined as the limit of the relative frequencies νn, but in the p-adic metric. We study a model with p-adic stochastics on the level of the hidden-variable description. Of course, the responses of macroapparatuses have to be described by ordinary stochastics. Thus our model describes a mixture of the p-adic stochastics of the microworld and the ordinary stochastics of macroapparatuses. In this model, the probabilities for physical observables are ordinary probabilities. At the same time, Bell's inequality is violated.
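A minimal sketch of the frequency definition mentioned here (notation generic): the relative frequency is formed as usual, but the limit is taken in the p-adic metric rather than the real one:

```latex
\nu_{n}(A) = \frac{n(A)}{n}, \qquad
P(A) = \lim_{n \to \infty} \nu_{n}(A) \ \text{ in } \bigl(\mathbb{Q}_{p},\, |\cdot|_{p}\bigr),
\qquad |x|_{p} = p^{-\operatorname{ord}_{p}(x)} .
```

Because convergence in $|\cdot|_{p}$ differs from convergence in the real metric, sequences of relative frequencies that diverge classically can have well-defined p-adic limits.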
Facchinello, Yann; Brailovski, Vladimir; Petit, Yvan; Mac-Thiong, Jean-Marc
2014-11-01
The concept of a monolithic Ti-Ni spinal rod with variable flexural stiffness is proposed to reduce the risks associated with spinal fusion. The variable stiffness is conferred to the rod using the Joule-heating local annealing technique. The annealing temperature and the mechanical properties' distributions resulted from this thermal treatment are numerically modeled and experimentally measured. To illustrate the possible applications of such a modeling approach, two case studies are presented: (a) optimization of the Joule-heating strategy to reduce annealing time, and (b) modulation of the rod's overall flexural stiffness using partial annealing. A numerical model of a human spine coupled with the model of the variable flexural stiffness spinal rod developed in this work can ultimately be used to maximize the stabilization capability of spinal instrumentation, while simultaneously decreasing the risks associated with spinal fusion. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Lehrer, Richard; Kim, Min-joung; Schauble, Leona
2007-01-01
New capabilities in "TinkerPlots 2.0" supported the conceptual development of fifth- and sixth-grade students as they pursued several weeks of instruction that emphasized data modeling. The instruction highlighted links between data analysis, chance, and modeling in the context of describing and explaining the distributions of measures that result…
Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer
2014-01-01
National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
Brunwasser, Steven M; Gebretsadik, Tebeb; Gold, Diane R; Turi, Kedir N; Stone, Cosby A; Datta, Soma; Gern, James E; Hartert, Tina V
2018-01-01
The International Study of Asthma and Allergies in Children (ISAAC) Wheezing Module is commonly used to characterize pediatric asthma in epidemiological studies, including nearly all airway cohorts participating in the Environmental Influences on Child Health Outcomes (ECHO) consortium. However, there is no consensus model for operationalizing wheezing severity with this instrument in explanatory research studies. Severity is typically measured using coarsely-defined categorical variables, reducing power and potentially underestimating etiological associations. More precise measurement approaches could improve testing of etiological theories of wheezing illness. We evaluated a continuous latent variable model of pediatric wheezing severity based on four ISAAC Wheezing Module items. Analyses included subgroups of children from three independent cohorts whose parents reported past wheezing: infants ages 0-2 in the INSPIRE birth cohort study (Cohort 1; n = 657), 6-7-year-old North American children from Phase One of the ISAAC study (Cohort 2; n = 2,765), and 5-6-year-old children in the EHAAS birth cohort study (Cohort 3; n = 102). Models were estimated using structural equation modeling. In all cohorts, covariance patterns implied by the latent variable model were consistent with the observed data, as indicated by non-significant χ2 goodness of fit tests (no evidence of model misspecification). Cohort 1 analyses showed that the latent factor structure was stable across time points and child sexes. In both cohorts 1 and 3, the latent wheezing severity variable was prospectively associated with wheeze-related clinical outcomes, including physician asthma diagnosis, acute corticosteroid use, and wheeze-related outpatient medical visits when adjusting for confounders. We developed an easily applicable continuous latent variable model of pediatric wheezing severity based on items from the well-validated ISAAC Wheezing Module. 
This model prospectively associates with asthma morbidity, as demonstrated in two ECHO birth cohort studies, and provides a more statistically powerful method of testing etiologic hypotheses of childhood wheezing illness and asthma.
Climatic extremes improve predictions of spatial patterns of tree species
Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.
2009-01-01
Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparably small, improvement (+20% in adjusted D2, +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristics curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability likely will improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.
When can ocean acidification impacts be detected from decadal alkalinity measurements?
NASA Astrophysics Data System (ADS)
Carter, B. R.; Frölicher, T. L.; Dunne, J. P.; Rodgers, K. B.; Slater, R. D.; Sarmiento, J. L.
2016-04-01
We use a large initial condition suite of simulations (30 runs) with an Earth system model to assess the detectability of biogeochemical impacts of ocean acidification (OA) on the marine alkalinity distribution from decadally repeated hydrographic measurements such as those produced by the Global Ship-Based Hydrographic Investigations Program (GO-SHIP). Detection of these impacts is complicated by alkalinity changes from variability and long-term trends in freshwater and organic matter cycling and ocean circulation. In our ensemble simulation, variability in freshwater cycling generates large changes in alkalinity that obscure the changes of interest and prevent the attribution of observed alkalinity redistribution to OA. These complications from freshwater cycling can be mostly avoided through salinity normalization of alkalinity. With the salinity-normalized alkalinity, modeled OA impacts are broadly detectable in the surface of the subtropical gyres by 2030. Discrepancies between this finding and the finding of an earlier analysis suggest that these estimates are strongly sensitive to the patterns of calcium carbonate export simulated by the model. OA impacts are detectable later in the subpolar and equatorial regions due to slower responses of alkalinity to OA in these regions and greater seasonal equatorial alkalinity variability. OA impacts are detectable later at depth despite lower variability due to smaller rates of change and consistent measurement uncertainty.
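The salinity normalization used above to factor out freshwater-cycling effects is, in its simplest common form, a rescaling of alkalinity to a reference salinity. A minimal sketch (the study may use a more refined normalization; the function and variable names are illustrative):

```python
def salinity_normalize(alkalinity, salinity, reference_salinity=35.0):
    """Simple salinity-normalized alkalinity: nAlk = Alk * S_ref / S.

    Removes the dilution/concentration signal from freshwater fluxes so
    that residual alkalinity changes can be attributed to other processes
    (e.g., ocean acidification impacts on CaCO3 cycling).
    """
    return alkalinity * reference_salinity / salinity
```

At the reference salinity the value is unchanged; fresher water (lower S) is scaled up to compensate for dilution.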
Midlatitude atmospheric OH response to the most recent 11-y solar cycle.
Wang, Shuhui; Li, King-Fai; Pongetti, Thomas J; Sander, Stanley P; Yung, Yuk L; Liang, Mao-Chang; Livesey, Nathaniel J; Santee, Michelle L; Harder, Jerald W; Snow, Martin; Mills, Franklin P
2013-02-05
The hydroxyl radical (OH) plays an important role in middle atmospheric photochemistry, particularly in ozone (O3) chemistry. Because it is mainly produced through photolysis and has a short chemical lifetime, OH is expected to show rapid responses to solar forcing [e.g., the 11-y solar cycle (SC)], resulting in variabilities in related middle atmospheric O3 chemistry. Here, we present an effort to investigate such OH variability using long-term observations (from space and the surface) and model simulations. Ground-based measurements and data from the Microwave Limb Sounder on the National Aeronautics and Space Administration's Aura satellite suggest an ∼7-10% decrease in OH column abundance from solar maximum to solar minimum that is highly correlated with changes in total solar irradiance, solar Mg-II index, and Lyman-α index during SC 23. However, model simulations using a commonly accepted solar UV variability parameterization give much smaller OH variability (∼3%). Although this discrepancy could result partially from the limitations in our current understanding of middle atmospheric chemistry, recently published solar spectral irradiance data from the Solar Radiation and Climate Experiment suggest a solar UV variability that is much larger than previously believed. With a solar forcing derived from the Solar Radiation and Climate Experiment data, modeled OH variability (∼6-7%) agrees much better with observations. Model simulations reveal the detailed chemical mechanisms, suggesting that such OH variability and the corresponding catalytic chemistry may dominate the O3 SC signal in the upper stratosphere. Continuing measurements through SC 24 are required to further understand this OH variability and its impacts on O3.
Tidholm, A; Höglund, K; Häggström, J; Ljungvall, I
2015-01-01
Pulmonary hypertension (PH) is commonly associated with myxomatous mitral valve disease (MMVD). Because some dogs with PH present without measurable tricuspid regurgitation (TR), it would be useful to investigate echocardiographic variables that can identify PH. To investigate associations between estimated systolic TR pressure gradient (TRPG) and dog characteristics and selected echocardiographic variables. 156 privately owned dogs. Prospective observational study comparing the estimations of TRPG with dog characteristics and selected echocardiographic variables in dogs with MMVD and measurable TR. Tricuspid regurgitation pressure gradient was significantly (P < .05) associated with body weight-corrected right (RVIDDn) and left (LVIDDn) ventricular end-diastolic and systolic (LVIDSn) internal diameters, pulmonary arterial (PA) acceleration-to-deceleration time ratio (AT/DT), heart rate, left atrial-to-aortic root ratio (LA/Ao), and the presence of congestive heart failure. Four variables remained significant in the multiple regression analysis with TRPG as the dependent variable: modeled as linear variables, LA/Ao (P < .0001) and RVIDDn (P = .041); modeled as second-order polynomial variables, AT/DT (P = .0039) and LVIDDn (P < .0001). The adjusted R(2)-value for the final model was 0.45, and receiver operating characteristic curve analysis suggested the model's performance in predicting PH, defined as 36, 45, and 55 mmHg, was fair (area under the curve [AUC] = 0.80), good (AUC = 0.86), and excellent (AUC = 0.92), respectively. In dogs with MMVD, the presence of PH might be suspected with the combination of decreased PA AT/DT, increased RVIDDn and LA/Ao, and a small or great LVIDDn. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
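The TRPG estimates discussed above are conventionally derived from the peak TR jet velocity via the simplified Bernoulli equation; a minimal sketch of that standard echocardiography relation (not code from the study):

```python
def trpg_mmhg(tr_peak_velocity_m_per_s):
    """Simplified Bernoulli equation: pressure gradient (mmHg) = 4 * v^2,
    where v is the peak tricuspid regurgitation jet velocity in m/s."""
    return 4.0 * tr_peak_velocity_m_per_s ** 2
```

A 3.0 m/s jet gives 36 mmHg, matching the lowest PH threshold used in the study.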
Sea Surface Salinity Variability from Simulations and Observations: Preparing for Aquarius
NASA Technical Reports Server (NTRS)
Jacob, S. Daniel; LeVine, David M.
2010-01-01
Oceanic fresh water transport has been shown to play an important role in the global hydrological cycle. Sea surface salinity (SSS) is representative of the surface fresh water fluxes, and the upcoming Aquarius mission, scheduled to be launched in December 2010, will provide excellent spatial and temporal SSS coverage to better estimate the net exchange. In most ocean general circulation models, SSS is relaxed to climatology to prevent model drift. While SST remains a well-observed variable, relaxing to SST reduces the range of SSS variability in the simulations (Fig. 1). The main objective of the present study is to simulate surface tracers using a primitive equation ocean model for multiple forcing data sets to identify and establish a baseline SSS variability. The simulated variability scales are compared to those from near-surface Argo salinity measurements.
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are not caused by biological differences but experimental bias. There are different sources of bias including variabilities during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC) with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
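The scan-level peak-shape model described above is cast as a linear mixed-effects model; written generically (these symbols are standard notation, not the paper's), such a model has the form

```latex
y = X\beta + Zu + \varepsilon, \qquad
u \sim \mathcal{N}(0, G), \qquad
\varepsilon \sim \mathcal{N}(0, R),
```

where the fixed effects $X\beta$ absorb systematic experimental bias (including, in the BNMD extension, drift over long run sequences) and the random effects $Zu$ capture run- and peak-level variability.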
Variable selection for marginal longitudinal generalized linear models.
Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio
2005-06-01
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while at the same time emphasizing some important robust features inherent to GC(p).
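GC(p) generalizes the classical Mallows's C(p); for intuition, here is a sketch of the classical criterion (the standard formula, not the paper's GC(p)):

```python
def mallows_cp(rss_p, sigma2_full, n, p):
    """Classical Mallows's Cp = RSS_p / sigma^2 - n + 2p.

    rss_p:       residual sum of squares of a candidate model with p parameters
    sigma2_full: error variance estimated from the full model
    n:           number of observations
    Candidate models with Cp close to p are considered adequate for prediction.
    """
    return rss_p / sigma2_full - n + 2 * p
```

For example, an unbiased 5-parameter model with RSS near sigma^2 * (n - p) yields Cp near 5.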
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of the Nonlinear AutoRegressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the contributions of both the newly introduced and the previously included variables to the model characteristics, and these are used to decide which model variables to retain or eliminate. The optimal model is thus obtained through goodness-of-fit measurement or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.
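The stepwise term-selection idea can be illustrated with a greedy forward pass over candidate regressors; a toy sketch using plain least-squares RSS (the paper applies dedicated statistics and significance tests, and its candidates are GNAR terms rather than raw columns):

```python
import numpy as np

def forward_stepwise(X, y, max_terms=3):
    """Greedy forward selection of regressor columns by residual sum of squares.

    Starting from an empty model, repeatedly add the candidate column that
    reduces the RSS of the least-squares fit the most.
    """
    n, k = X.shape
    selected, remaining = [], list(range(k))
    for _ in range(min(max_terms, k)):
        best_j, best_rss = None, np.inf
        for j in remaining:
            A = X[:, selected + [j]]
            beta = np.linalg.lstsq(A, y, rcond=None)[0]
            residual = y - A @ beta
            rss = float(residual @ residual)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

In practice each addition would be accepted only if its improvement statistic is significant, mirroring the retain-or-eliminate step described above.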
NASA Astrophysics Data System (ADS)
Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam
2015-10-01
Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significant test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratio. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles and on the diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also, the interaction between breakfast intake in a week with sleep duration, and the interaction between gender and protein intake.
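The q-1 simultaneously fitted logit equations can be converted into q category probabilities as follows; a minimal sketch with category 0 as the reference (names illustrative, not the study's code):

```python
import numpy as np

def multinomial_probs(x, coefs):
    """Convert q-1 fitted logit equations into q category probabilities.

    coefs: (q-1, d) array; row j holds the coefficients of the logit of
           category j+1 versus the reference category 0.
    x:     feature vector of length d (include a leading 1 for the intercept).
    """
    log_odds = coefs @ x                  # q-1 log-odds vs. the reference
    odds = np.exp(log_odds)
    denom = 1.0 + odds.sum()              # reference category has odds 1
    return np.concatenate(([1.0], odds)) / denom   # length q, sums to 1
```

With all coefficients zero, every category (e.g., normal / overweight / obese) gets probability 1/q, as expected.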
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator has experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.
Repeatability of a 3D multi-segment foot model protocol in presence of foot deformities.
Deschamps, Kevin; Staes, Filip; Bruyninckx, Herman; Busschots, Ellen; Matricali, Giovanni A; Spaepen, Pieter; Meyer, Christophe; Desloovere, Kaat
2012-07-01
Repeatability studies on 3D multi-segment foot models (3DMFMs) have mainly considered healthy participants, which contrasts with the widespread application of these models to evaluate foot pathologies. The current study aimed at establishing the repeatability of the 3DMFM described by Leardini et al. in the presence of foot deformities. Foot kinematics of eight adult participants were analyzed using a repeated-measures design including two therapists with different levels of experience. The inter-trial variability was higher compared to the kinematics of healthy subjects. Consideration of relative angles resulted in the lowest inter-session variability. The absolute 3D rotations between the Sha-Cal and Cal-Met segments seem to have the lowest variability for both therapists. A general trend towards higher σ(sess)/σ(trial) ratios was observed when the midfoot was involved. The current study indicates that not only relative 3D rotations and planar angles but also a number of absolute parameters can be measured consistently in patients, serving as a basis for the decision-making process. Copyright © 2012 Elsevier B.V. All rights reserved.
Observability and synchronization of neuron models.
Aguirre, Luis A; Portes, Leonardo L; Letellier, Christophe
2017-10-01
Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies the observability properties of the models but also shows the limitations of applicability of each type of coefficient in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done by performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that providing greatest observability) from each node and (ii) without having to estimate the phase.
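For linear systems, observability reduces to a rank condition on the Kalman observability matrix; the coefficients used in the paper extend this idea to nonlinear neuron models, but the classical linear version is a useful anchor (a sketch, not the paper's coefficients):

```python
import numpy as np

def observability_rank(A, C):
    """Rank of the Kalman observability matrix O = [C; CA; ...; CA^(n-1)].

    The pair (A, C) is observable iff rank(O) equals n, i.e. the full
    state x can be reconstructed from the measured outputs y = Cx.
    """
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return int(np.linalg.matrix_rank(np.vstack(blocks)))
```

For a harmonic oscillator, measuring the first coordinate alone already yields full rank, which is exactly the "record one well-chosen variable per node" idea above.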
Applying the Expectancy-Value Model to understand health values.
Zhang, Xu-Hao; Xie, Feng; Wee, Hwee-Lin; Thumboo, Julian; Li, Shu-Chuen
2008-03-01
The Expectancy-Value Model (EVM) is the most structured model in psychology for predicting attitudes by measuring attitudinal attributes (AAs) and relevant external variables. Because health value can be categorized as an attitude, we aimed to apply the EVM to explore its usefulness in explaining variances in health values and to investigate underlying factors. A focus group discussion was carried out to identify the most common and significant AAs toward 5 different health states (coded as 11111, 11121, 21221, 32323, and 33333 in the EuroQol Five-Dimension (EQ-5D) descriptive system). AAs were measured as a sum of products of subjective probability (expectancy) and perceived value of attributes on 7-point Likert scales. Health values were measured using visual analog scales (VAS, range 0-1). External variables (age, sex, ethnicity, education, housing, marital status, and concurrent chronic diseases) were also incorporated into a survey questionnaire distributed by convenience sampling among eligible respondents. Univariate analyses were used to identify external variables causing significant differences in VAS. A multiple linear regression (MLR) model and a hierarchical regression model were used to investigate the explanatory power of AAs and possible significant external variable(s), separately or in combination, for each individual health state and a mixed scenario of five states, respectively. Four AAs were identified, namely, "worsening your quality of life in terms of health" (WQoL), "adding a burden to your family" (BTF), "making you less independent" (MLI), and "unable to work or study" (UWS). Data were analyzed based on 232 respondents (mean [SD] age: 27.7 [15.07] years, 49.1% female). Health values varied significantly across the 5 health states, ranging from 0.12 (33333) to 0.97 (11111). With no significant external variables identified, the EVM explained up to 62% of the variance in health values across the 5 health states. 
The explanatory power of the 4 AAs was found to be between 13% and 28% in separate MLR models (P < 0.05). When data were analyzed for each health state, variances in health values became small and the explanatory power of the EVM was reduced to a range between 8% and 23%. The EVM was useful in explaining variances of health values and predicting important factors. Its power to explain small variances might be restricted by the limitations of a 7-point Likert scale in measuring AAs accurately. With further improvement and validation of a compatible continuous scale for more accurate measurement, the EVM is expected to explain health values to a larger extent.
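The MLR step described above is easy to sketch. The code below simulates expectancy × value products for four attributes and recovers the explained variance with ordinary least squares; all numbers (weights, noise level) are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 232  # same sample size as the study; the data here are simulated

# Each attitudinal attribute enters as expectancy * value (7-point scales).
expectancy = rng.integers(1, 8, size=(n, 4)).astype(float)
value = rng.integers(1, 8, size=(n, 4)).astype(float)
AA = expectancy * value  # columns stand in for WQoL, BTF, MLI, UWS

# Hypothetical weights: higher attribute scores lower the VAS health value.
true_w = np.array([-0.004, -0.003, -0.002, -0.003])
vas = 0.9 + AA @ true_w + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), AA])
beta, *_ = np.linalg.lstsq(X, vas, rcond=None)
resid = vas - X @ beta
r2 = 1 - resid.var() / vas.var()   # proportion of variance explained
print(round(r2, 2))
```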
Ruediger, T M; Allison, S C; Moore, J M; Wainner, R S
2014-09-01
The purposes of this descriptive and exploratory study were to examine electrophysiological measures of ulnar sensory nerve function in disease free adults to determine reliability, determine reference values computed with appropriate statistical methods, and examine predictive ability of anthropometric variables. Antidromic sensory nerve conduction studies of the ulnar nerve using surface electrodes were performed on 100 volunteers. Reference values were computed from optimally transformed data. Reliability was computed from 30 subjects. Multiple linear regression models were constructed from four predictor variables. Reliability was greater than 0.85 for all paired measures. Responses were elicited in all subjects; reference values for sensory nerve action potential (SNAP) amplitude from above elbow stimulation are 3.3 μV and decrement across-elbow less than 46%. No single predictor variable accounted for more than 15% of the variance in the response. Electrophysiologic measures of the ulnar sensory nerve are reliable. Absent SNAP responses are inconsistent with disease free individuals. Reference values recommended in this report are based on appropriate transformations of non-normally distributed data. No strong statistical model of prediction could be derived from the limited set of predictor variables. Reliability analyses combined with relatively low level of measurement error suggest that ulnar sensory reference values may be used with confidence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Two-Part and Related Regression Models for Longitudinal Data
Farewell, V.T.; Long, D.L.; Tom, B.D.M.; Yiu, S.; Su, L.
2017-01-01
Statistical models that involve a two-part mixture distribution are applicable in a variety of situations. Frequently, the two parts are a model for the binary response variable and a model for the outcome variable that is conditioned on the binary response. Two common examples are zero-inflated or hurdle models for count data and two-part models for semicontinuous data. Recently, there has been particular interest in the use of these models for the analysis of repeated measures of an outcome variable over time. The aim of this review is to consider motivations for the use of such models in this context and to highlight the central issues that arise with their use. We examine two-part models for semicontinuous and zero-heavy count data, and we also consider models for count data with a two-part random effects distribution. PMID:28890906
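A minimal numerical illustration of the two-part idea for semicontinuous data: model the zero/nonzero part and the positive part separately, then recombine them. The mixing probability and lognormal parameters below are arbitrary choices, not taken from any particular application:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Semicontinuous outcome: a point mass at zero plus a lognormal positive part.
is_positive = rng.random(n) < 0.4          # part 1: binary "any outcome" indicator
y = np.where(is_positive,
             rng.lognormal(mean=1.0, sigma=0.5, size=n),  # part 2: positive values
             0.0)

# Two-part decomposition: estimate P(y > 0) and E[y | y > 0] separately,
# then recombine: E[y] = P(y > 0) * E[y | y > 0].
p_hat = (y > 0).mean()
mu_pos = y[y > 0].mean()
e_y_twopart = p_hat * mu_pos

print(p_hat, mu_pos, e_y_twopart, y.mean())
```

In full two-part regression models each part gets its own covariates (logistic for the binary part, a linear or gamma model for the positive part), but the recombination identity demonstrated here is the same.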
Population activity statistics dissect subthreshold and spiking variability in V1.
Bányai, Mihály; Koman, Zsombor; Orbán, Gergő
2017-07-01
Response variability, as measured by fluctuating responses upon repeated performance of trials, is a major component of neural responses, and its characterization is key to interpret high dimensional population recordings. Response variability and covariability display predictable changes upon changes in stimulus and cognitive or behavioral state, providing an opportunity to test the predictive power of models of neural variability. Still, there is little agreement on which model to use as a building block for population-level analyses, and models of variability are often treated as a subject of choice. We investigate two competing models, the doubly stochastic Poisson (DSP) model assuming stochasticity at spike generation, and the rectified Gaussian (RG) model tracing variability back to membrane potential variance, to analyze stimulus-dependent modulation of both single-neuron and pairwise response statistics. Using a pair of model neurons, we demonstrate that the two models predict similar single-cell statistics. However, DSP and RG models have contradicting predictions on the joint statistics of spiking responses. To test the models against data, we build a population model to simulate stimulus change-related modulations in pairwise response statistics. We use single-unit data from the primary visual cortex (V1) of monkeys to show that while model predictions for variance are qualitatively similar to experimental data, only the RG model's predictions are compatible with joint statistics. These results suggest that models using Poisson-like variability might fail to capture important properties of response statistics. We argue that membrane potential-level modeling of stochasticity provides an efficient strategy to model correlations. NEW & NOTEWORTHY Neural variability and covariability are puzzling aspects of cortical computations. 
For efficient decoding and prediction, models of information encoding in neural populations hinge on an appropriate model of variability. Our work shows that stimulus-dependent changes in pairwise but not in single-cell statistics can differentiate between two widely used models of neuronal variability. Contrasting model predictions with neuronal data provides hints on the noise sources in spiking and provides constraints on statistical models of population activity. Copyright © 2017 the American Physiological Society.
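The contrast between the two models can be sketched with single-cell simulations. Assuming, for illustration only, a gamma-distributed rate for the DSP model and a rectified Gaussian "membrane potential" driving a Poisson spike generator for the RG model, both produce super-Poisson count variability (Fano factor above 1), matching the paper's point that single-cell statistics alone do not separate them:

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 20000

# Doubly stochastic Poisson: the rate itself fluctuates across trials,
# then spike counts are Poisson given that rate.
rates = rng.gamma(shape=4.0, scale=2.5, size=trials)   # mean rate 10
dsp_counts = rng.poisson(rates)

# Rectified Gaussian: variability lives at the membrane-potential level;
# the rate is the rectified potential, counts again Poisson given the rate.
v = rng.normal(loc=8.0, scale=4.0, size=trials)
rg_counts = rng.poisson(np.maximum(v, 0.0))

fano_dsp = dsp_counts.var() / dsp_counts.mean()
fano_rg = rg_counts.var() / rg_counts.mean()
print(fano_dsp, fano_rg)   # both exceed 1: super-Poisson variability
```

Distinguishing the models, as the paper argues, requires pairwise statistics: correlating the latent rates (DSP) versus correlating the potentials before rectification (RG) produces different joint spike-count statistics.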
Application of latent variable model in Rosenberg self-esteem scale.
Leung, Shing-On; Wu, Hui-Ping
2013-01-01
Latent Variable Models (LVMs) are applied to the Rosenberg Self-Esteem Scale (RSES). Parameter estimation automatically gives negative signs, hence no recoding is necessary for negatively scored items. Bad items can be located through parameter estimates, item characteristic curves, and other measures. Two factors are extracted, one on self-esteem and the other on the degree to which moderate views are taken, with the latter not often covered in previous studies. A goodness-of-fit measure based on two-way margins is used, but more work is needed. Results show that the scaling provided by models with a more formal statistical grounding correlates highly with the conventional method, which may provide justification for the usual practice.
NASA Technical Reports Server (NTRS)
Hayden, R. E.; Kadman, Y.; Chanaud, R. C.
1972-01-01
The feasibility of quieting the externally-blown-flap (EBF) noise sources which are due to interaction of jet exhaust flow with deployed flaps was demonstrated on a 1/15-scale 3-flap EBF model. Sound field characteristics were measured and noise reduction fundamentals were reviewed in terms of source models. Test of the 1/15-scale model showed broadband noise reductions of up to 20 dB resulting from combination of variable impedance flap treatment and mesh grids placed in the jet flow upstream of the flaps. Steady-state lift, drag, and pitching moment were measured with and without noise reduction treatment.
Extending the Confrontation of Weather and Climate Models from Soil Moisture to Surface Flux Data
NASA Astrophysics Data System (ADS)
Dirmeyer, P.; Chen, L.; Wu, J.
2016-12-01
The atmosphere and land components of weather and climate models are typically developed separately and coupled as a last step before new model versions are released. Separate testing of land surface models (LSMs) and atmospheric models is often quite extensive in the development phase, but validation of coupled land-atmosphere behavior is often minimal if performed at all. This is partly because of this piecemeal model development approach and partly because the necessary in situ data to confront coupled land-atmosphere models (LAMs) have been meager until quite recently. Over the past 10-20 years there has been a growing number of networks of measurements of land surface states, surface fluxes, radiation and near-surface meteorology, although they have been largely uncoordinated and frequently incomplete across the range of variables necessary to validate LAMs. We extend recent work "confronting" a variety of LSMs and LAMs with in situ observations of soil moisture from cross-standardized networks to comparisons with measurements of surface latent and sensible heat fluxes at FLUXNET sites in a variety of climate regimes around the world. The motivation is to determine how well LSMs represent observed statistics of variability and co-variability, how much models differ from one another, and how those statistics change when the LSMs are coupled to atmospheric models. Furthermore, comparisons are made to several LAMs in both open-loop (free running) and reanalysis configurations. This shows to what extent data assimilation can constrain the processes involved in flux variability, and helps illuminate model development pathways to improve coupled land-atmosphere interactions in weather and climate models.
Error-in-variables models in calibration
NASA Astrophysics Data System (ADS)
Lira, I.; Grientschnig, D.
2017-12-01
In many calibration operations, the stimuli applied to the measuring system or instrument under test are derived from measurement standards whose values may be considered to be perfectly known. In that case, it is assumed that calibration uncertainty arises solely from inexact measurement of the responses, from imperfect control of the calibration process and from the possible inaccuracy of the calibration model. However, the premise that the stimuli are completely known is never strictly fulfilled and in some instances it may be grossly inadequate. Then, error-in-variables (EIV) regression models have to be employed. In metrology, these models have been approached mostly from the frequentist perspective. In contrast, not much guidance is available on their Bayesian analysis. In this paper, we first present a brief summary of the conventional statistical techniques that have been developed to deal with EIV models in calibration. We then proceed to discuss the alternative Bayesian framework under some simplifying assumptions. Through a detailed example about the calibration of an instrument for measuring flow rates, we provide advice on how the user of the calibration function should employ the latter framework for inferring the stimulus acting on the calibrated device when, in use, a certain response is measured.
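A frequentist flavor of the EIV problem is easy to demonstrate: when the stimulus is measured with error, ordinary least squares attenuates the calibration slope, while total least squares (orthogonal regression, valid here because the two error standard deviations are equal by construction) stays close to the truth. All values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
true_slope, true_intercept = 2.0, 1.0

x_true = rng.uniform(0, 10, n)            # stimulus values (never observed exactly)
x_obs = x_true + rng.normal(0, 1.0, n)    # stimuli measured with error
y_obs = true_intercept + true_slope * x_true + rng.normal(0, 1.0, n)

# Ordinary least squares ignores the error in x and attenuates the slope.
ols_slope = np.cov(x_obs, y_obs)[0, 1] / np.var(x_obs)

# Total least squares (equal error variances): the smallest right singular
# vector of the centered data matrix is the normal of the orthogonal-fit line.
X = np.column_stack([x_obs - x_obs.mean(), y_obs - y_obs.mean()])
_, _, Vt = np.linalg.svd(X, full_matrices=False)
a, b = Vt[-1]
tls_slope = -a / b

print(ols_slope, tls_slope)   # OLS biased toward zero; TLS close to 2
```

The Bayesian treatment discussed in the paper instead places priors on the unknown stimuli and propagates both error sources through the posterior, but the attenuation effect it guards against is the one shown here.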
Variability in the Length and Frequency of Steps of Sighted and Visually Impaired Walkers
ERIC Educational Resources Information Center
Mason, Sarah J.; Legge, Gordon E.; Kallie, Christopher S.
2005-01-01
The variability of the length and frequency of steps was measured in sighted and visually impaired walkers at three different paces. The variability was low, especially at the preferred pace, and similar for both groups. A model incorporating step counts and step frequency provides good estimates of the distance traveled. Applications to…
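The distance model described in the abstract amounts to multiplying a step count by a mean step length. A toy simulation with low step-length variability (the numbers are hypothetical, not the study's measurements) shows why such an estimate is accurate when variability is small:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical preferred-pace walking: 500 steps with ~3% step-length CV.
mean_step_len = 0.7                                  # meters (assumed)
steps = rng.normal(mean_step_len, 0.02, size=500)    # individual step lengths
true_distance = steps.sum()

# The model: distance ~ step count * mean step length.
estimate = steps.size * mean_step_len
rel_err = abs(estimate - true_distance) / true_distance
print(rel_err)   # typically a small fraction of a percent
```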
An improved strategy for regression of biophysical variables and Landsat ETM+ data.
Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; David P. Turner
2003-01-01
Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent woody canopy cover, and leaf area index (LAI). Traditional methods of regression are not...
What Is Going on Inside the Arrows? Discovering the Hidden Springs in Causal Models
Murray-Watters, Alexander; Glymour, Clark
2016-01-01
Using Gebharter's (2014) representation, we consider aspects of the problem of discovering the structure of unmeasured sub-mechanisms when the variables in those sub-mechanisms have not been measured. Exploiting an early insight of Sober's (1998), we provide a correct algorithm for identifying latent, endogenous structure—sub-mechanisms—for a restricted class of structures. The algorithm can be merged with other methods for discovering causal relations among unmeasured variables, and feedback relations between measured variables and unobserved causes can sometimes be learned. PMID:27313331
Soil variability in engineering applications
NASA Astrophysics Data System (ADS)
Vessia, Giovanna
2014-05-01
Natural geomaterials, such as soils and rocks, show spatial variability and heterogeneity of physical and mechanical properties, which can be measured by field and laboratory testing. Heterogeneity concerns different values of litho-technical parameters pertaining to similar lithological units placed close to each other. Variability, on the contrary, is inherent to the formation and evolution processes experienced by each geological unit (homogeneous geomaterials on average) and is captured as a spatial structure of fluctuation of physical property values about their mean trend, e.g. the unit weight, the hydraulic permeability, the friction angle, and the cohesion, among others. These spatial variations must be handled by engineering models to accomplish reliable design of structures and infrastructures. Matheron (1962) introduced Geostatistics as the most comprehensive tool to manage the spatial correlation of parameter measurements, used in a wide range of earth science applications. In the field of engineering geology, Vanmarcke (1977) made the first pioneering attempts to describe and manage the inherent variability of geomaterials, although Terzaghi (1943) had already highlighted that spatial fluctuations of the physical and mechanical parameters used in geotechnical design cannot be neglected. A few years later, Mandelbrot (1983) and Turcotte (1986) interpreted the internal arrangement of geomaterials according to Fractal Theory. In the same years, Vanmarcke (1983) proposed Random Field Theory, providing mathematical tools to deal with the inherent variability of each geological unit or stratigraphic succession that can be treated as one material. In this approach, measurement fluctuations of physical parameters are interpreted through the spatial variability structure, consisting of the correlation function and the scale of fluctuation. 
Fenton and Griffiths (1992) combined random field simulation with the finite element method to produce the Random Finite Element Method (RFEM). This method has been used to investigate the random behavior of soils in the context of a variety of classical geotechnical problems. Afterward, subsequent studies collected worldwide variability values of many technical parameters of soils (Phoon and Kulhawy 1999a) and their spatial correlation functions (Phoon and Kulhawy 1999b). In Italy, Cherubini et al. (2007) calculated the spatial variability structure of sandy and clayey soils from standard cone penetration test readings. The large extent of the measured spatial variability of soils and rocks worldwide heavily affects the reliability of geotechnical design, alongside other uncertainties introduced by testing devices and engineering models. So far, several methods have been proposed to deal with the preceding sources of uncertainty in engineering design models (e.g. First Order Reliability Method, Second Order Reliability Method, Response Surface Method, High Dimensional Model Representation, etc.). Nowadays, efforts in this field focus on (1) measuring the spatial variability of different rocks and soils and (2) developing numerical models that take spatial variability into account as an additional physical variable. References Cherubini C., Vessia G. and Pula W. 2007. Statistical soil characterization of Italian sites for reliability analyses. Proc. 2nd Int. Workshop. on Characterization and Engineering Properties of Natural Soils, 3-4: 2681-2706. Griffiths D.V. and Fenton G.A. 1993. Seepage beneath water retaining structures founded on spatially random soil, Géotechnique, 43(6): 577-587. Mandelbrot B.B. 1983. The Fractal Geometry of Nature. San Francisco: W H Freeman. Matheron G. 1962. Traité de Géostatistique appliquée. Tome 1, Editions Technip, Paris, 334 p. Phoon K.K. and Kulhawy F.H. 1999a. 
Characterization of geotechnical variability. Can Geotech J, 36(4): 612-624. Phoon K.K. and Kulhawy F.H. 1999b. Evaluation of geotechnical property variability. Can Geotech J, 36(4): 625-639. Terzaghi K. 1943. Theoretical Soil Mechanics. New York: John Wiley and Sons. Turcotte D.L. 1986. Fractals and fragmentation. J Geophys Res, 91: 1921-1926. Vanmarcke E.H. 1977. Probabilistic modeling of soil profiles. J Geotech Eng Div, ASCE, 103: 1227-1246. Vanmarcke E.H. 1983. Random fields: analysis and synthesis. MIT Press, Cambridge.
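Vanmarcke-style random fields of the kind discussed above can be simulated directly. The sketch below draws realizations of a unit-variance field with an exponential correlation structure via a Cholesky factorization; the 2 m scale of fluctuation and the depth grid are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
z = np.linspace(0.0, 20.0, 201)   # depth (m), 0.1 m spacing
theta = 2.0                       # scale of fluctuation (m), assumed

# Exponential correlation model rho(tau) = exp(-2|tau| / theta).
tau = np.abs(z[:, None] - z[None, :])
C = np.exp(-2.0 * tau / theta)
L = np.linalg.cholesky(C + 1e-10 * np.eye(z.size))  # jitter for stability

# 1000 realizations of a zero-mean, unit-variance soil property field;
# a depth-dependent mean trend would simply be added to each column.
fields = L @ rng.normal(size=(z.size, 1000))

# Empirical correlation at 1 m lag vs the target exp(-1) ~ 0.368.
lag10 = np.corrcoef(fields[0], fields[10])[0, 1]
print(lag10)
```

Feeding such realizations into a finite element model, one element at a time, is essentially the RFEM recipe of Fenton and Griffiths mentioned above.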
Detection of carbon monoxide trends in the presence of interannual variability
NASA Astrophysics Data System (ADS)
Strode, Sarah A.; Pawson, Steven
2013-11-01
Changes in fossil fuel emissions are a major driver of changes in atmospheric CO, but detection of trends in CO from anthropogenic sources is complicated by the presence of large interannual variability (IAV) in biomass burning. We use a multiyear model simulation of CO with year-specific biomass burning to predict the number of years needed to detect the impact of changes in Asian anthropogenic emissions on downwind regions. Our study includes two cases for changing anthropogenic emissions: a stepwise change of 15% and a linear trend of 3% yr-1. We first examine how well the model reproduces the observed IAV of CO over the North Pacific, since this variability impacts the time needed to detect significant anthropogenic trends. The modeled IAV over the North Pacific correlates well with that seen from the Measurements of Pollution in the Troposphere (MOPITT) instrument but underestimates the magnitude of the variability. The model predicts that a 3% yr-1 trend in Asian anthropogenic emissions would lead to a statistically significant trend in CO surface concentration in the western United States within 12 years, and accounting for Siberian boreal biomass-burning emissions greatly reduces the number of years needed for trend detection. Combining the modeled trend with the observed MOPITT variability at 500 hPa, we estimate that the 3% yr-1 trend could be detectable in satellite observations over Asia in approximately a decade. Our predicted timescales for trend detection highlight the importance of long-term measurements of CO from satellites.
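The trend-versus-variability tradeoff in the abstract is often summarized with a widely used approximation (due to Weatherhead and colleagues) for the number of years needed to detect a linear trend in autocorrelated noise. The CO-like numbers fed to it below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def years_to_detect(trend_per_year, noise_sd, phi=0.0):
    """Approximate years of data needed to detect a linear trend
    (at roughly 95% confidence and 90% power) in noise with
    standard deviation noise_sd and lag-1 autocorrelation phi."""
    return ((3.3 * noise_sd / abs(trend_per_year))
            * np.sqrt((1 + phi) / (1 - phi))) ** (2.0 / 3.0)

# Illustrative assumptions: a 3 ppb/yr trend (~3%/yr on a ~100 ppb
# background), 15 ppb of interannual noise, modest autocorrelation.
print(round(years_to_detect(3.0, 15.0, phi=0.3), 1))  # about 8 years here
```

The formula makes the abstract's qualitative points quantitative: larger IAV (bigger `noise_sd`) or stronger persistence (bigger `phi`) stretches the detection time, which is why removing the biomass-burning contribution shortens it.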
Velez, Brandon L; Moradi, Bonnie
2012-07-01
The present study explored the links of 2 workplace contextual variables--perceptions of workplace heterosexist discrimination and lesbian, gay, and bisexual (LGB)-supportive climates--with job satisfaction and turnover intentions in a sample of LGB employees. An extension of the theory of work adjustment (TWA) was used as the conceptual framework for the study; as such, perceived person-organization (P-O) fit was tested as a mediator of the relations between the workplace contextual variables and job outcomes. Data were analyzed from 326 LGB employees. Zero-order correlations indicated that perceptions of workplace heterosexist discrimination and LGB-supportive climates were correlated in expected directions with P-O fit, job satisfaction, and turnover intentions. Structural equation modeling (SEM) was used to compare multiple alternative measurement models evaluating the discriminant validity of the 2 workplace contextual variables relative to one another, and the 3 TWA job variables relative to one another; SEM was also used to test the hypothesized mediation model. Comparisons of multiple alternative measurement models supported the construct distinctiveness of the variables of interest. The test of the hypothesized structural model revealed that only LGB-supportive climates (and not workplace heterosexist discrimination) had a unique direct positive link with P-O fit and, through the mediating role of P-O fit, had significant indirect positive and negative relations with job satisfaction and turnover intentions, respectively. Moreover, P-O fit had a significant indirect negative link with turnover intentions through job satisfaction.
NASA Astrophysics Data System (ADS)
Mejnertsen, L.; Eastwood, J. P.; Hietala, H.; Schwartz, S. J.; Chittenden, J. P.
2018-01-01
Empirical models of the Earth's bow shock are often used to place in situ measurements in context and to understand the global behavior of the foreshock/bow shock system. They are derived statistically from spacecraft bow shock crossings and typically treat the shock surface as a conic section parameterized according to a uniform solar wind ram pressure, although more complex models exist. Here a global magnetohydrodynamic simulation is used to analyze the variability of the Earth's bow shock under real solar wind conditions. The shape and location of the bow shock is found as a function of time, and this is used to calculate the shock velocity over the shock surface. The results are compared to existing empirical models. Good agreement is found in the variability of the subsolar shock location. However, empirical models fail to reproduce the two-dimensional shape of the shock in the simulation. This is because significant solar wind variability occurs on timescales less than the transit time of a single solar wind phase front over the curved shock surface. Empirical models must therefore be used with care when interpreting spacecraft data, especially when observations are made far from the Sun-Earth line. Further analysis reveals a bias to higher shock speeds when measured by virtual spacecraft. This is attributed to the fact that the spacecraft only observes the shock when it is in motion. This must be accounted for when studying bow shock motion and variability with spacecraft data.
Alarcón, Graciela S; McGwin, Gerald; Sanchez, Martha L; Bastian, Holly M; Fessler, Barri J; Friedman, Alan W; Baethge, Bruce A; Roseman, Jeffrey; Reveille, John D
2004-02-15
To determine the impact of wealth on disease activity in the multiethnic (Hispanic, African American, and Caucasian) LUMINA (Lupus in Minorities, Nature versus nurture) cohort of patients with systemic lupus erythematosus (SLE) and disease duration ≤5 years at enrollment. Variables (socioeconomic, demographic, clinical, immunologic, immunogenetic, behavioral, and psychological) were measured at enrollment and annually thereafter. Four questions from the Women's Health Initiative study were used to measure wealth. Disease activity was measured with the Systemic Lupus Activity Measure (SLAM). The relationship between the different variables and wealth was then examined. Next, the impact of wealth on disease activity was examined in regression models where the dependent variables were the SLAM score and SLAM global (physician). Variables previously found to impact disease activity plus the wealth questions were included in the models. Questions on income, assets, and debt were found to distinguish patients into groups, wealthier and less wealthy. Less wealthy patients tended to be younger, women, noncaucasian, less educated, unmarried, less likely to have health insurance, and more likely to live below the poverty line. They also tended to have more active disease, more abnormal illness-related behaviors, less social support, and lower levels of self-reported mental functioning. None of the wealth questions was retained in the regression models, although other socioeconomic features (such as African American ethnicity, poverty, and younger age) did. Wealth, per se, does not appear to have an additional predictive value, over and above traditional measures of socioeconomic status, in SLE disease activity.
Evaluation of anti-migration properties of biliary covered self-expandable metal stents.
Minaga, Kosuke; Kitano, Masayuki; Imai, Hajime; Harwani, Yogesh; Yamao, Kentaro; Kamata, Ken; Miyata, Takeshi; Omoto, Shunsuke; Kadosaka, Kumpei; Sakurai, Toshiharu; Nishida, Naoshi; Kudo, Masatoshi
2016-08-14
To assess the anti-migration potential of six biliary covered self-expandable metal stents (C-SEMSs) using a newly designed phantom model. In the phantom model, the stent was placed in differently sized holes in a silicone wall and retracted with a retraction robot. Resistance force to migration (RFM) was measured by a force gauge on the stent end. Radial force (RF) was measured with an RF measurement machine. The measured flare structure variables were the outer diameter, height, and taper angle of the flare (ODF, HF, and TAF, respectively). Correlations between RFM and RF or flare variables were analyzed using a linear correlation model. Of the six stents, five were braided and the other was laser-cut. The RF and RFM of each stent were expressed as the average of five replicate measurements. For all six stents, RFM and RF decreased as the hole diameter increased. For all six stents, RFM and RF correlated strongly when the stent had not fully expanded. This correlation was not observed in the five braided stents, excluding the laser-cut stent. For all six stents, there was a strong correlation between RFM and TAF when the stent fully expanded. For the five braided stents, RFM after full stent expansion correlated strongly with all three stent flare structure variables (ODF, HF, and TAF). The laser-cut C-SEMS had higher RFMs than the braided C-SEMSs regardless of expansion state. RF was an important anti-migration property when the C-SEMS did not fully expand. Once fully expanded, stent flare structure variables play an important role in anti-migration.
Anthropometric predictors of body fat as measured by hydrostatic weighing in Guatemalan adults.
Ramirez-Zea, Manuel; Torun, Benjamin; Martorell, Reynaldo; Stein, Aryeh D
2006-04-01
Most predictive equations currently used to assess percentage body fat (%BF) were derived from persons in industrialized Western societies. We developed equations to predict %BF from anthropometric measurements in rural and urban Guatemalan adults. Body density was measured in 123 women and 114 men by using hydrostatic weighing and simultaneous measurement of residual lung volume. Anthropometric measures included weight (in kg), height (in cm), 4 skinfold thicknesses [(STs) in mm], and 6 circumferences (in cm). Sex-specific multiple linear regression models were developed with %BF as the dependent variable and age, residence (rural or urban), and all anthropometric measures as independent variables (the "full" model). A "simplified" model was developed by using age, residence, weight, height, and arm, abdominal, and calf circumferences as independent variables. The preferred full models were %BF = -80.261 - (weight x 0.623) + (height x 0.214) + (tricipital ST x 0.379) + (abdominal ST x 0.202) + (abdominal circumference x 0.940) + (thigh circumference x 0.316); root mean square error (RMSE) = 3.0; and pure error (PE) = 3.4 for men and %BF = -15.471 + (tricipital ST x 0.332) + (subscapular ST x 0.154) + (abdominal ST x 0.119) + (hip circumference x 0.356); RMSE = 2.4; and PE = 2.9 for women. The preferred simplified models were %BF = -48.472 - (weight x 0.257) + (abdominal circumference x 0.989); RMSE = 3.8; and PE = 3.7 for men and %BF = 19.420 + (weight x 0.385) - (height x 0.215) + (abdominal circumference x 0.265); RMSE = 3.5; and PE = 3.5 for women. These equations performed better in this developing-country population than did previously published equations.
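The preferred full models quoted above translate directly into code. The coefficients below are exactly those published in the abstract; the measurements passed in at the end are made-up illustrative values, not data from the study:

```python
def pct_bf_men_full(weight_kg, height_cm, tricipital_st, abdominal_st,
                    abdominal_circ, thigh_circ):
    """Full %BF model for men (skinfolds in mm, circumferences in cm)."""
    return (-80.261 - 0.623 * weight_kg + 0.214 * height_cm
            + 0.379 * tricipital_st + 0.202 * abdominal_st
            + 0.940 * abdominal_circ + 0.316 * thigh_circ)

def pct_bf_women_full(tricipital_st, subscapular_st, abdominal_st, hip_circ):
    """Full %BF model for women."""
    return (-15.471 + 0.332 * tricipital_st + 0.154 * subscapular_st
            + 0.119 * abdominal_st + 0.356 * hip_circ)

# Illustrative (hypothetical) measurements:
print(round(pct_bf_men_full(70, 165, 12, 20, 85, 52), 1))   # -> 16.4
print(round(pct_bf_women_full(18, 15, 22, 98), 1))          # -> 30.3
```

Note the equations were derived for Guatemalan adults; as the abstract stresses, equations built on industrialized Western populations performed worse on this sample, and the reverse caution applies when exporting these coefficients elsewhere.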
Dynamic stability of passive dynamic walking on an irregular surface.
Su, Jimmy Li-Shin; Dingwell, Jonathan B
2007-12-01
Falls that occur during walking are a significant health problem. One of the greatest impediments to solving this problem is that there is no single obviously "correct" way to quantify walking stability. While many people use variability as a proxy for stability, measures of variability do not quantify how the locomotor system responds to perturbations. The purpose of this study was to determine how changes in walking surface variability affect changes in both locomotor variability and stability. We modified an irreducibly simple model of walking to apply random perturbations that simulated walking over an irregular surface. Because the model's global basin of attraction remained fixed, increasing the amplitude of the applied perturbations directly increased the risk of falling in the model. We generated ten simulations of 300 consecutive strides of walking at each of six perturbation amplitudes ranging from zero (i.e., a smooth continuous surface) up to the maximum level the model could tolerate without falling over. Orbital stability defines how a system responds to small (i.e., "local") perturbations from one cycle to the next and was quantified by calculating the maximum Floquet multipliers for the model. Local stability defines how a system responds to similar perturbations in real time and was quantified by calculating short-term and long-term local exponential rates of divergence for the model. As perturbation amplitudes increased, no changes were seen in orbital stability (r(2)=2.43%; p=0.280) or long-term local instability (r(2)=1.0%; p=0.441). These measures essentially reflected the fact that the model never actually "fell" during any of our simulations. Conversely, the variability of the walker's kinematics increased exponentially (r(2)>or=99.6%; p<0.001) and short-term local instability increased linearly (r(2)=88.1%; p<0.001). These measures thus predicted the increased risk of falling exhibited by the model. 
For all simulated conditions, the walker remained orbitally stable, while exhibiting substantial local instability. This was because very small initial perturbations diverged away from the limit cycle, while larger initial perturbations converged toward the limit cycle. These results provide insight into how these different proposed measures of walking stability are related to each other and to risk of falling.
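The orbital-stability measure used above, the maximum Floquet multiplier, is easiest to see on a one-dimensional return map. The logistic map below is a generic stand-in, not the walking model: its period-1 orbit has |FM| < 1, so small cycle-to-cycle perturbations decay even though the trajectory wanders in between:

```python
# Floquet multiplier of the period-1 orbit of the logistic map x -> r*x*(1-x).
r = 2.8
f = lambda x: r * x * (1 - x)

x_star = 1 - 1 / r          # fixed point of the return map (the "limit cycle")
fm = r * (1 - 2 * x_star)   # map derivative at the orbit = Floquet multiplier

# |FM| < 1 -> orbitally stable: a small perturbation shrinks cycle by cycle.
x = x_star + 1e-3
for _ in range(20):
    x = f(x)
print(abs(fm), abs(x - x_star))   # deviation shrinks by ~|FM| per cycle
```

For the walking model the return map is multidimensional, so the multipliers are eigenvalues of a Jacobian estimated from Poincaré-section data, but the stability criterion (largest |FM| below 1) is the same.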
A Note on Verification of Computer Simulation Models
ERIC Educational Resources Information Center
Aigner, Dennis J.
1972-01-01
Establishes an argument that questions the validity of one "test" of goodness-of-fit (the extent to which a series of obtained measures agrees with a series of theoretical measures) for the simulated time path of a simple endogenous (internally developed) variable in a simultaneous, perhaps dynamic, econometric model. (Author)
The Person Approach: Concepts, Measurement Models, and Research Strategy
ERIC Educational Resources Information Center
Magnusson, David
2003-01-01
This chapter discusses the "person approach" to studying developmental processes by focusing on the distinction and complementarity between this holistic-interactionistic framework and what has become designated as the variable approach. Particular attention is given to measurement models for use in the person approach. The discussion on the…
An Investigation of Calculus Learning Using Factorial Modeling.
ERIC Educational Resources Information Center
Dick, Thomas P.; Balomenos, Richard H.
Structural covariance models that would explain the correlations observed among mathematics achievement and participation measures and related cognitive and affective variables were developed. A sample of college calculus students (N=268; 124 females and 144 males) was administered a battery of cognitive tests (including measures of spatial-visual…
Properties of added variable plots in Cox's regression model.
Lindkvist, M
2000-03-01
The added variable plot is useful for examining the effect of a covariate in regression models. The plot provides information regarding the inclusion of a covariate, and is useful in identifying influential observations on the parameter estimates. Hall et al. (1996) proposed a plot for Cox's proportional hazards model derived by regarding the Cox model as a generalized linear model. This paper proves and discusses properties of this plot. These properties make the plot a valuable tool in model evaluation. Quantities considered include parameter estimates, residuals, leverage, case influence measures and correspondence to previously proposed residuals and diagnostics.
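The construction behind an added variable plot can be sketched in its linear-model form. Hall et al.'s Cox-model version works through the GLM weights of the partial likelihood, which is beyond a short sketch; the code below illustrates only the core idea, on synthetic data, via the Frisch-Waugh-Lovell property that the plot's slope equals the candidate covariate's full-model coefficient.

```python
import numpy as np

# Hedged sketch: the linear-model analogue of an added variable plot.
# All data here are synthetic; the 0.5 coefficient on z is assumed.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # existing covariates
z = rng.standard_normal(n)                                 # candidate covariate
y = X @ np.array([1.0, 2.0]) + 0.5 * z + 0.1 * rng.standard_normal(n)

def resid(v, X):
    """Residuals of v after least-squares projection onto columns of X."""
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

e_y = resid(y, X)   # y adjusted for the existing covariates
e_z = resid(z, X)   # z adjusted for the existing covariates

# The added variable plot is e_y against e_z; its slope equals the
# coefficient of z in the full model (Frisch-Waugh-Lovell theorem).
slope = (e_z @ e_y) / (e_z @ e_z)
beta_full, *_ = np.linalg.lstsq(np.column_stack([X, z]), y, rcond=None)
print(round(slope, 2))
```

Plotting `e_y` versus `e_z` then shows both the strength of the adjusted association and any individual points that dominate the slope estimate, which is the influence-diagnostic use described in the abstract.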
The Development of a New Model of Solar EUV Irradiance Variability
NASA Technical Reports Server (NTRS)
Warren, Harry; Wagner, William J. (Technical Monitor)
2002-01-01
The goal of this research project is the development of a new model of solar EUV (Extreme Ultraviolet) irradiance variability. The model is based on combining differential emission measure distributions derived from spatially and spectrally resolved observations of active regions, coronal holes, and the quiet Sun with full-disk solar images. An initial version of this model was developed with earlier funding from NASA. The new version of the model developed with this research grant will incorporate observations from SoHO as well as updated compilations of atomic data. These improvements will make the model calculations much more accurate.
Systems and methods for modeling and analyzing networks
Hill, Colin C; Church, Bruce W; McDonagh, Paul D; Khalil, Iya G; Neyarapally, Thomas A; Pitluk, Zachary W
2013-10-29
The systems and methods described herein utilize a probabilistic modeling framework for reverse engineering an ensemble of causal models from data, and then forward simulating the ensemble of models to analyze and predict the behavior of the network. In certain embodiments, the systems and methods described herein include data-driven techniques for developing causal models for biological networks. Causal network models include computational representations of the causal relationships between independent variables such as a compound of interest and dependent variables such as measured DNA alterations, changes in mRNA, protein, and metabolites, through to phenotypic readouts of efficacy and toxicity.
Miskell, Georgia; Salmond, Jennifer A; Williams, David E
2018-04-01
Portable low-cost instruments have been validated and used to measure ambient nitrogen dioxide (NO2) at multiple sites over a small urban area with 20-min time resolution. We use these results, combined with land use regression (LUR) and rank correlation methods, to explore the effects of traffic, urban design features, and local meteorology and atmospheric chemistry on small-scale spatio-temporal variations. We measured NO2 at 45 sites around the downtown area of Vancouver, BC, in spring 2016, and constructed four different models: i) a model based on averaging concentrations observed at each site over the whole measurement period, and separate temporal models for ii) morning, iii) midday, and iv) afternoon. Redesign of the temporal models using the average model predictors as constants gave three 'hybrid' models that used both spatial and temporal variables. These accounted for approximately 50% of the total variation, with mean absolute error ± 5 ppb. Ranking sites by concentration and by change in concentration across the day showed a shift of high NO2 concentrations across the central city from morning to afternoon. Locations could be identified in which NO2 concentration was determined by the geography of the site, and others in which the concentration changed markedly from morning to afternoon, indicating the importance of temporal controls. Rank correlation results complemented LUR in identifying significant urban design variables that impacted NO2 concentration. High variability across a relatively small space was partially described by predictor variables related to traffic (bus stop density, speed limits, traffic counts, distance to traffic lights), atmospheric chemistry (ozone, dew point), and environment (land use, trees). A high-density network recording continuously would be needed to fully capture local variations. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamecnik, J. R.; Edwards, T. B.
The conversions of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the Nitric-Glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by SRNL from 2011 to 2015. The goal of this work was to develop empirical correlations for these variables versus measurable variables from the chemical process so that these quantities could be predicted a priori from the sludge composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the initial work on these correlations based on the aforementioned data. Further refinement of the models as additional data is collected is recommended.
Kulinkina, Alexandra V; Walz, Yvonne; Koch, Magaly; Biritwum, Nana-Kwadwo; Utzinger, Jürg; Naumova, Elena N
2018-06-04
Schistosomiasis is a water-related neglected tropical disease. In many endemic low- and middle-income countries, insufficient surveillance and reporting lead to poor characterization of the demographic and geographic distribution of schistosomiasis cases. Hence, modeling is relied upon to predict areas of high transmission and to inform control strategies. We hypothesized that utilizing remotely sensed (RS) environmental data in combination with water, sanitation, and hygiene (WASH) variables could improve on the current predictive modeling approaches. Schistosoma haematobium prevalence data, collected from 73 rural Ghanaian schools, were used in a random forest model to investigate the predictive capacity of 15 environmental variables derived from RS data (Landsat 8, Sentinel-2, and Global Digital Elevation Model) with fine spatial resolution (10-30 m). Five methods of variable extraction were tested to determine the spatial linkage between school-based prevalence and the environmental conditions of potential transmission sites, including applying the models to known human water contact locations. Lastly, measures of local water access and groundwater quality were incorporated into RS-based models to assess the relative importance of environmental and WASH variables. Predictive models based on environmental characterization of specific locations where people contact surface water bodies offered some improvement as compared to the traditional approach based on environmental characterization of locations where prevalence is measured. A water index (MNDWI) and topographic variables (elevation and slope) were important environmental risk factors, while overall, groundwater iron concentration predominated in the combined model that included WASH variables. The study helps to understand localized drivers of schistosomiasis transmission. 
Specifically, unsatisfactory water quality in boreholes perpetuates reliance on surface water bodies, indirectly increasing schistosomiasis risk and resulting in rapid reinfection (up to 40% prevalence six months following preventive chemotherapy). Considering WASH-related risk factors in schistosomiasis prediction can help shift the focus of control strategies from treating symptoms to reducing exposure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackman, C. H.; Douglass, A. R.; Chandra, S.; Stolarski, R. S.
1991-03-20
Eight years of NMC (National Meteorological Center) temperature and SBUV (solar backscattered ultraviolet) ozone data were used to calculate the monthly mean heating rates and residual circulation for use in a two-dimensional photochemical model in order to examine the interannual variability of modeled ozone. Fairly good correlations were found in the interannual behavior of modeled and measured SBUV ozone in the upper stratosphere at middle to low latitudes, where temperature-dependent photochemistry is thought to dominate ozone behavior. The calculated total ozone is found to be more sensitive to the interannual residual circulation changes than to the interannual temperature changes. The magnitude of the modeled ozone variability is similar to the observed variability, but the observed and modeled year-to-year deviations are mostly uncorrelated. The large component of the observed total ozone variability at low latitudes due to the quasi-biennial oscillation (QBO) is not seen in the modeled total ozone, as only a small QBO signal is present in the heating rates, temperatures, and monthly mean residual circulation. Large interannual changes in tropospheric dynamics are believed to influence the interannual variability in the total ozone, especially at middle and high latitudes. Since these tropospheric changes and most of the QBO forcing are not included in the model formulation, it is not surprising that the interannual variability in total ozone is not well represented in the model computations.
Wang, Ching-Yun; Song, Xiao
2017-01-01
Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625
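The "best linear combination" idea can be sketched in its generic form: given two consistent estimators of the same coefficient, choose the weight that minimizes the variance of their combination. This is a hedged illustration only; the variances, covariance, and estimates below are made-up numbers, not output of the paper's actual Cox measurement-error procedure.

```python
import numpy as np

# Hedged sketch: variance-minimizing linear combination of two
# consistent estimators b1 and b2 of the same parameter, with
# variances v1, v2 and covariance c (all assumed values here).
def best_linear_combination(b1, b2, v1, v2, c):
    """Return (estimate, variance) of w*b1 + (1-w)*b2 with w chosen
    to minimize the variance of the combination."""
    w = (v2 - c) / (v1 + v2 - 2.0 * c)
    est = w * b1 + (1.0 - w) * b2
    var = w**2 * v1 + (1.0 - w)**2 * v2 + 2.0 * w * (1.0 - w) * c
    return est, var

est, var = best_linear_combination(b1=0.48, b2=0.55, v1=0.010, v2=0.025, c=0.004)
print(round(est, 3), round(var, 4))
```

The combined variance is never larger than the smaller of the two input variances, which is why combining the correction estimator with the naive-estimator difference can only improve efficiency.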
Connections between Graphical Gaussian Models and Factor Analysis
ERIC Educational Resources Information Center
Salgueiro, M. Fatima; Smith, Peter W. F.; McDonald, John W.
2010-01-01
Connections between graphical Gaussian models and classical single-factor models are obtained by parameterizing the single-factor model as a graphical Gaussian model. Models are represented by independence graphs, and associations between each manifest variable and the latent factor are measured by factor partial correlations. Power calculations…
Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel
NASA Technical Reports Server (NTRS)
Boney, Andy D.
2014-01-01
The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to compute the acquired data. These functions calculate engineering units, tunnel parameters, flowmeters, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunction(s) associated with each major function are discussed.
ERIC Educational Resources Information Center
Nowak, Christoph; Heinrichs, Nina
2008-01-01
A meta-analysis encompassing all studies evaluating the impact of the Triple P-Positive Parenting Program on parent and child outcome measures was conducted in an effort to identify variables that moderate the program's effectiveness. Hierarchical linear models (HLM) with three levels of data were employed to analyze effect sizes. The results (N =…
Giorgio Vacchiano; John D. Shaw; R. Justin DeRose; James N. Long
2008-01-01
Diameter increment is an important variable in modeling tree growth. Most facets of predicted tree development are dependent in part on diameter or diameter increment, the most commonly measured stand variable. The behavior of the Forest Vegetation Simulator (FVS) largely relies on the performance of the diameter increment model and the subsequent use of predicted dbh...
Karstoft, Karen-Inge; Vedtofte, Mia S.; Nielsen, Anni B.S.; Osler, Merete; Mortensen, Erik L.; Christensen, Gunhild T.; Andersen, Søren B.
2017-01-01
Background: Studies of the association between pre-deployment cognitive ability and post-deployment post-traumatic stress disorder (PTSD) have shown mixed results. Aims: To study the influence of pre-deployment cognitive ability on PTSD symptoms 6–8 months post-deployment in a large population while controlling for pre-deployment education and deployment-related variables. Method: Study linking prospective pre-deployment conscription board data with post-deployment self-reported data in 9695 Danish Army personnel deployed to different war zones in 1997–2013. The association between pre-deployment cognitive ability and post-deployment PTSD was investigated using repeated-measures logistic regression models. Two models with cognitive ability score as the main exposure variable were created (model 1 and model 2). Model 1 was adjusted only for pre-deployment variables, while model 2 was adjusted for both pre-deployment and deployment-related variables. Results: When including only variables recorded pre-deployment (cognitive ability score and educational level) and gender (model 1), all variables predicted post-deployment PTSD. When deployment-related variables were added (model 2), this was no longer the case for cognitive ability score. However, when educational level was removed from the model adjusted for deployment-related variables, the association between cognitive ability and post-deployment PTSD became significant. Conclusions: Pre-deployment lower cognitive ability did not predict post-deployment PTSD independently of educational level after adjustment for deployment-related variables. Declaration of interest: None. Copyright and usage: © The Royal College of Psychiatrists 2017. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) license. PMID:29163983
NASA Astrophysics Data System (ADS)
Mangla, Rohit; Kumar, Shashi; Nandy, Subrata
2016-05-01
SAR and LiDAR remote sensing have already shown the potential of active sensors for forest parameter retrieval. A SAR sensor in its fully polarimetric mode has the advantage of retrieving the scattering properties of different components of forest structure, and LiDAR has the capability to measure structural information with very high accuracy. This study focused on retrieval of forest aboveground biomass (AGB) using Terrestrial Laser Scanner (TLS) based point clouds and the scattering properties of forest vegetation obtained from decomposition modelling of RISAT-1 fully polarimetric SAR data. TLS data were acquired for 14 plots of the Timli forest range, Uttarakhand, India. The forest area is dominated by Sal trees, and random sampling with a plot size of 0.1 ha (31.62 m × 31.62 m) was adopted for TLS and field data collection. RISAT-1 data were processed to retrieve SAR-based variables, and TLS point-cloud-based 3D imaging was done to retrieve LiDAR-based variables. Surface scattering, double-bounce scattering, volume scattering, helix, and wire scattering were the SAR-based variables retrieved from polarimetric decomposition. Tree heights and stem diameters were used as LiDAR-based variables, retrieved from single-tree vertical height and least-squares circle fit methods, respectively. All the variables obtained for the forest plots were used as input to a machine-learning-based random forest regression model, which was developed in this study for forest AGB estimation. Modelled output for forest AGB showed reliable accuracy (RMSE = 27.68 t/ha) and a good coefficient of determination (0.63) was obtained through the linear regression between modelled AGB and field-estimated AGB. The sensitivity analysis showed that the model was most sensitive to the major contributing variables (stem diameter and volume scattering), and these variables were measured from two different remote sensing techniques. This study strongly recommends the integration of SAR and LiDAR data for forest AGB estimation.
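The two accuracy figures quoted above (RMSE and the coefficient of determination) are straightforward to compute once modelled and field-estimated AGB are paired by plot. The sketch below uses made-up plot values, not the 14 Timli plots from the study.

```python
import numpy as np

# Hedged sketch: the accuracy metrics reported for modelled vs.
# field-estimated AGB. The plot values below are illustrative only.
agb_field = np.array([120., 95., 210., 160., 80., 140., 175., 130.])   # t/ha
agb_model = np.array([110., 105., 190., 170., 90., 150., 160., 120.])  # t/ha

# Root-mean-square error of the model predictions
rmse = np.sqrt(np.mean((agb_model - agb_field) ** 2))

# Coefficient of determination R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((agb_field - agb_model) ** 2)
ss_tot = np.sum((agb_field - agb_field.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(rmse, 1), round(r2, 2))
```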
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcomes of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions on extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽ 2.5°, while that for the zeta potential values must be ⩽ 7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
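The propagation step can be sketched by Monte Carlo: sample the measured inputs from distributions whose standard deviations equal the reported standard errors, and look at the spread of the derived quantity. The function `f` below is a toy placeholder, not the actual XDLVO free-energy equations; only the propagation mechanics are illustrated, with assumed central values.

```python
import numpy as np

# Hedged sketch: Monte Carlo propagation of measurement uncertainty in
# contact angle (SE = 2.5 deg) and zeta potential (SE = 7 mV) into a
# derived interfacial quantity. f() is an assumed toy form, NOT XDLVO.
rng = np.random.default_rng(42)
n = 100_000

theta = rng.normal(55.0, 2.5, n)    # contact angle, degrees
zeta = rng.normal(-20.0, 7.0, n)    # zeta potential, mV

def f(theta_deg, zeta_mV):
    # Toy derived quantity combining both inputs (assumed form)
    return np.cos(np.radians(theta_deg)) * zeta_mV

samples = f(theta, zeta)
print(round(samples.mean(), 1), round(samples.std(), 1))
```

The standard deviation of `samples` is the propagated uncertainty; comparing it to the magnitude of the derived quantity itself is the kind of check the abstract describes.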
Community models for wildlife impact assessment: a review of concepts and approaches
Schroeder, Richard L.
1987-01-01
The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application.
This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
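The variability statistic used throughout this study, the coefficient of variation (CV) of interspike intervals, is simple to compute. The sketch below uses synthetic exponential intervals (a homogeneous Poisson process, for which CV is close to 1) rather than any of the models in the study; as the abstract notes, the CV alone cannot detect long-range dependence.

```python
import numpy as np

# Hedged sketch: CV of interspike intervals (ISIs). Exponential ISIs
# correspond to a homogeneous Poisson spike train, whose CV ~= 1;
# the scale and sample size below are arbitrary choices.
rng = np.random.default_rng(7)

isi = rng.exponential(scale=0.05, size=50_000)   # synthetic ISIs, seconds
cv = isi.std(ddof=1) / isi.mean()
print(round(cv, 2))
```

Cortical spike trains typically show CV near or above 1, which is the benchmark the integrate-and-fire models above are asked to reproduce, alongside long-range dependence.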
Scattering of Acoustic Waves from Ocean Boundaries
2013-09-30
of predictive models that can account for all of the physical processes and variability of acoustic propagation and scattering in ocean...collaboration with Dr. Nicholas Chotiros, particularly for theoretical development of bulk acoustic/sediment modeling and laser roughness measurements...G. Potty and J. Miller. Measurement and modeling of Scholte wave dispersion in coastal waters. In Proc. of Third Int. Conf. on Ocean Acoustics
Recent variability of the solar spectral irradiance and its impact on climate modelling
NASA Astrophysics Data System (ADS)
Ermolli, I.; Matthes, K.; Dudok de Wit, T.; Krivova, N. A.; Tourpali, K.; Weber, M.; Unruh, Y. C.; Gray, L.; Langematz, U.; Pilewskie, P.; Rozanov, E.; Schmutz, W.; Shapiro, A.; Solanki, S. K.; Woods, T. N.
2013-04-01
The lack of long and reliable time series of solar spectral irradiance (SSI) measurements makes an accurate quantification of solar contributions to recent climate change difficult. Whereas earlier SSI observations and models provided a qualitatively consistent picture of the SSI variability, recent measurements by the SORCE (SOlar Radiation and Climate Experiment) satellite suggest a significantly stronger variability in the ultraviolet (UV) spectral range and changes in the visible and near-infrared (NIR) bands in anti-phase with the solar cycle. A number of recent chemistry-climate model (CCM) simulations have shown that this might have significant implications on the Earth's atmosphere. Motivated by these results, we summarize here our current knowledge of SSI variability and its impact on Earth's climate. We present a detailed overview of existing SSI measurements and provide a thorough comparison of models available to date. SSI changes influence the Earth's atmosphere, both directly, through changes in shortwave (SW) heating and therefore temperature and ozone distributions in the stratosphere, and indirectly, through dynamical feedbacks. We investigate these direct and indirect effects using several state-of-the-art CCM simulations forced with measured and modelled SSI changes. A unique asset of this study is the use of a common comprehensive approach for an issue that is usually addressed separately by different communities. We show that the SORCE measurements are difficult to reconcile with earlier observations and with SSI models.
Of the five SSI models discussed here, specifically NRLSSI (Naval Research Laboratory Solar Spectral Irradiance), SATIRE-S (Spectral And Total Irradiance REconstructions for the Satellite era), COSI (COde for Solar Irradiance), SRPM (Solar Radiation Physical Modelling), and OAR (Osservatorio Astronomico di Roma), only one shows a behaviour of the UV and visible irradiance qualitatively resembling that of the recent SORCE measurements. However, the integral of the SSI computed with this model over the entire spectral range does not reproduce the measured cyclical changes of the total solar irradiance, which is an essential requisite for realistic evaluations of solar effects on the Earth's climate in CCMs. We show that within the range provided by the recent SSI observations and semi-empirical models discussed here, the NRLSSI model and SORCE observations represent the lower and upper limits in the magnitude of the SSI solar cycle variation. The results of the CCM simulations, forced with the SSI solar cycle variations estimated from the NRLSSI model and from SORCE measurements, show that the direct solar response in the stratosphere is larger for the SORCE than for the NRLSSI data. Correspondingly, larger UV forcing also leads to a larger surface response. Finally, we discuss the reliability of the available data and we propose additional coordinated work, first to build composite SSI data sets out of scattered observations and to refine current SSI models, and second, to run coordinated CCM experiments.
Learning Instance-Specific Predictive Models
Visweswaran, Shyam; Cooper, Gregory F.
2013-01-01
This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average on all performance measures against all the comparison algorithms. PMID:25045325
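The model-averaging step at the heart of the ISMB idea can be reduced to its simplest form: each candidate model contributes its predictive probability for the target, weighted by its posterior probability. The log marginal likelihoods and per-model predictions below are illustrative numbers, not output of an actual Markov blanket search.

```python
import numpy as np

# Hedged sketch: Bayesian model averaging over a small set of models.
# All numbers are assumed for illustration.
log_marginal = np.array([-10.2, -11.0, -12.5])   # log P(D | M_k), assumed
p_target_1 = np.array([0.80, 0.65, 0.30])        # P(y=1 | instance, M_k), assumed

# Posterior model probabilities under a uniform prior over models,
# computed stably by subtracting the max log marginal likelihood
w = np.exp(log_marginal - log_marginal.max())
w /= w.sum()

p_avg = float(w @ p_target_1)   # model-averaged prediction for this instance
print(round(p_avg, 3))
```

The instance-specific twist described above is in how the set of models is chosen: the search is steered toward models that are predictive for the particular instance at hand, rather than a single global model.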
Durrieu, Sylvie; Gosselin, Frédéric; Herpigny, Basile
2017-01-01
We explored the potential of airborne laser scanner (ALS) data to improve Bayesian models linking biodiversity indicators of the understory vegetation to environmental factors. Biodiversity was studied at plot level and models were built to investigate species abundance for the most abundant plants found on each study site, and for ecological group richness based on light preference. The usual abiotic explanatory factors related to climate, topography and soil properties were used in the models. ALS data, available for two contrasting study sites, were used to provide biotic factors related to forest structure, which was assumed to be a key driver of understory biodiversity. Several ALS variables were found to have significant effects on biodiversity indicators. However, the responses of biodiversity indicators to forest structure variables, as revealed by the Bayesian model outputs, were shown to be dependent on the abiotic environmental conditions characterizing the study areas. Lower responses were observed on the lowland site than on the mountainous site. In the latter, shade-tolerant and heliophilous species richness was impacted by vegetation structure indicators linked to light penetration through the canopy. However, to reveal the full effects of forest structure on biodiversity indicators, forest structure would need to be measured over much wider areas than the plot we assessed. It seems obvious that the forest structure surrounding the field plots can impact biodiversity indicators measured at plot level. Various scales were found to be relevant depending on the biodiversity indicators that were modelled and on the ALS variable considered. Finally, our results underline the utility of lidar data in abundance and richness models to characterize forest structure with variables that are difficult to measure in the field, either due to their nature or to the size of the area they relate to. PMID:28902920
Corron, Louise; Marchal, François; Condemi, Silvana; Telmon, Norbert; Chaumoitre, Kathia; Adalian, Pascal
2018-05-31
Subadult age estimation should rely on sampling and statistical protocols that capture developmental variability in order to produce more accurate age estimates. In this perspective, measurements were taken on the fifth lumbar vertebrae and/or clavicles of 534 French males and females aged 0-19 years and the ilia of 244 males and females aged 0-12 years. These variables were fitted in nonparametric multivariate adaptive regression splines (MARS) models with 95% prediction intervals (PIs) of age. The models were tested on two independent samples from Marseille and the Luis Lopes reference collection from Lisbon. Models using ilium width and module, maximum clavicle length, and lateral vertebral body heights were more than 92% accurate. Precision was lower for postpubertal individuals. By integrating local nonlinearities in the relationship between age and the variables, together with dynamic prediction intervals, the models incorporated the normal increase in interindividual growth variability with age (heteroscedasticity of variance), yielding more biologically realistic age predictions. © 2018 American Academy of Forensic Sciences.
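MARS builds its fits from hinge (piecewise-linear) basis functions. The sketch below illustrates the idea with a single fixed knot recovered by ordinary least squares; the real MARS procedure selects knots and terms adaptively, and the simulated age/measurement data, the knot at 12 years, and the crude homoscedastic prediction band are assumptions for illustration only.

```python
import numpy as np

def hinge(x, knot):
    """MARS-style hinge basis: max(0, x - knot)."""
    return np.maximum(0.0, x - knot)

# Simulated skeletal measurement vs. age, with a slope change at 12 years
rng = np.random.default_rng(0)
age = rng.uniform(0, 19, 300)
y = 10 + 2.0 * age - 1.7 * hinge(age, 12) + rng.normal(0, 1, 300)

# Least-squares fit on an intercept, a linear term, and one hinge basis
X = np.column_stack([np.ones_like(age), age, hinge(age, 12)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
resid_sd = float(np.std(y - pred, ddof=3))

# Crude homoscedastic 95% prediction band; the paper's dynamic PIs instead
# let the interval widen with age to track growth heteroscedasticity
lo, hi = pred - 1.96 * resid_sd, pred + 1.96 * resid_sd
print(beta.round(2))
```

The fitted coefficients recover the two slopes (2.0 before the knot, 2.0 − 1.7 = 0.3 after), which is the piecewise-linear behaviour MARS encodes.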
Wittke, Estefânia; Fuchs, Sandra C; Fuchs, Flávio D; Moreira, Leila B; Ferlin, Elton; Cichelero, Fábio T; Moreira, Carolina M; Neyeloff, Jeruza; Moreira, Marina B; Gus, Miguel
2010-11-05
Blood pressure (BP) variability has been associated with cardiovascular outcomes, but there is no consensus on the most effective method of measuring it by ambulatory blood pressure monitoring (ABPM). We evaluated the association between three different methods of estimating BP variability by ABPM and the ankle brachial index (ABI). In a cross-sectional study of patients with hypertension, BP variability was estimated by the time rate index (the first derivative of SBP over time), the standard deviation (SD) of 24-hour SBP, and the coefficient of variability of 24-hour SBP. ABI was measured with a Doppler probe. The sample included 425 patients with a mean age of 57 ± 12 years, of whom 69.2% were women, 26.1% current smokers and 22.1% diabetic. Abnormal ABI (≤ 0.90 or ≥ 1.40) was present in 58 patients. The time rate index was 0.516 ± 0.146 mmHg/min in patients with abnormal ABI versus 0.476 ± 0.124 mmHg/min in patients with normal ABI (P = 0.007). In a logistic regression model the time rate index was associated with ABI, regardless of age (OR = 6.9, 95% CI = 1.1-42.1; P = 0.04). In a multiple linear regression model, adjusting for age, SBP and diabetes, the time rate index was strongly associated with ABI (P < 0.01). None of the other indexes of BP variability were associated with ABI in univariate and multivariate analyses. The time rate index is a sensitive method of measuring BP variability by ABPM. Its performance for risk stratification of patients with hypertension should be explored in longitudinal studies.
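The three variability indices compared in the study can be computed directly from an ABPM series. The sketch below assumes one common formulation of the time rate index (the mean absolute first derivative of SBP between successive readings); the sinusoidal readings are simulated placeholders, not patient data.

```python
import numpy as np

def bp_variability(times_min, sbp):
    """Three ABPM variability indices for a 24-h systolic BP series.
    times_min: reading times in minutes; sbp: systolic readings in mmHg."""
    sbp = np.asarray(sbp, dtype=float)
    t = np.asarray(times_min, dtype=float)
    sd = sbp.std(ddof=1)                      # SD of 24-hour SBP (mmHg)
    cv = 100.0 * sd / sbp.mean()              # coefficient of variability (%)
    # Time rate index: mean absolute first derivative of SBP over time
    rate = float(np.mean(np.abs(np.diff(sbp) / np.diff(t))))  # mmHg/min
    return sd, cv, rate

# Hypothetical readings every 30 minutes over 24 hours
t = np.arange(0, 24 * 60, 30)
sbp = 130 + 10 * np.sin(2 * np.pi * t / (24 * 60))
sd, cv, rate = bp_variability(t, sbp)
print(round(sd, 1), round(cv, 1), round(rate, 3))
```

Note that the SD and CV ignore the ordering of readings, while the time rate index is sensitive to how quickly pressure changes between readings, which is why the three measures can disagree.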
Landau, Sabine; Emsley, Richard; Dunn, Graham
2018-06-01
Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. 
Estimates of causal mediation effects derived by approach (A) will be biased if any of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under these processes include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all three processes considered here. When applied to the example, there was evidence of mediation under all methods, but the estimate of the indirect effect depended on the approach used, with the proportion mediated varying from 57% to 86%. Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.
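A small simulation makes the three analysis options concrete for the total treatment effect. The data-generating values below (baseline mean, slope, effect size, noise) are arbitrary assumptions; under simple randomisation all three estimators recover the true effect, differing mainly in precision, with ANCOVA the most precise.

```python
import numpy as np

# (A) post-only, (B) change score, (C) ANCOVA on simulated trial data
rng = np.random.default_rng(2)
n = 100000
baseline = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)              # randomised allocation
effect = 5.0                               # true average treatment effect
post = 10 + 0.6 * baseline + effect * treat + rng.normal(0, 8, n)

# (A) post-treatment outcome alone: difference in means
est_a = post[treat == 1].mean() - post[treat == 0].mean()

# (B) change score over the treatment phase
change = post - baseline
est_b = change[treat == 1].mean() - change[treat == 0].mean()

# (C) ANCOVA: regress post on treatment, conditioning on baseline
X = np.column_stack([np.ones(n), treat, baseline])
est_c = np.linalg.lstsq(X, post, rcond=None)[0][1]
print(round(est_a, 2), round(est_b, 2), round(est_c, 2))
```

All three estimates land near 5; repeating the simulation would show the ANCOVA estimate has the smallest spread, which is the precision argument the abstract refers to.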
Partitioning neuronal variability
Goris, Robbe L.T.; Movshon, J. Anthony; Simoncelli, Eero P.
2014-01-01
Responses of sensory neurons differ across repeated measurements. This variability is usually treated as stochasticity arising within neurons or neural circuits. However, some portion of the variability arises from fluctuations in excitability due to factors that are not purely sensory, such as arousal, attention, and adaptation. To isolate these fluctuations, we developed a model in which spikes are generated by a Poisson process whose rate is the product of a drive that is sensory in origin, and a gain summarizing stimulus-independent modulatory influences on excitability. This model provides an accurate account of response distributions of visual neurons in macaque LGN, V1, V2, and MT, revealing that variability originates in large part from excitability fluctuations which are correlated over time and between neurons, and which increase in strength along the visual pathway. The model provides a parsimonious explanation for observed systematic dependencies of response variability and covariability on firing rate. PMID:24777419
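The modulated Poisson model is easy to simulate: spike counts are Poisson with a rate equal to a fixed sensory drive multiplied by a trial-to-trial gain. The gamma gain distribution and the parameter values below are illustrative assumptions; the key signature is that the variance exceeds the mean by a term that grows with the squared firing rate.

```python
import numpy as np

# Spike count ~ Poisson(gain * drive), gain fluctuating across trials
rng = np.random.default_rng(1)
drive = 20.0                       # sensory drive (expected spikes per trial)
n_trials = 100000
sigma_g = 0.3                      # SD of the multiplicative gain
shape = 1.0 / sigma_g**2
gain = rng.gamma(shape, scale=1.0 / shape, size=n_trials)  # mean 1, var sigma_g^2
counts = rng.poisson(gain * drive)

mean, var = counts.mean(), counts.var()
# A pure Poisson process predicts var == mean; with gain fluctuations the
# excess variance is approximately sigma_g^2 * drive^2
print(round(mean, 1), round(var, 1), round(var / mean, 2))
```

This super-Poisson scaling of variance with firing rate is the systematic dependency the model is meant to explain parsimoniously.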
Prediction of performance on the RCMP physical ability requirement evaluation.
Stanish, H I; Wood, T M; Campagna, P
1999-08-01
The Royal Canadian Mounted Police use the Physical Ability Requirement Evaluation (PARE) for screening applicants. The purposes of this investigation were to identify those field tests of physical fitness that were associated with PARE performance and to determine which most accurately classified successful and unsuccessful PARE performers. The participants were 27 female and 21 male volunteers. Testing included measures of aerobic power, anaerobic power, agility, muscular strength, muscular endurance, and body composition. Multiple regression analysis revealed a three-variable model for males (70-lb bench press, standing long jump, and agility) explaining 79% of the variability in PARE time, whereas a one-variable model (agility) explained 43% of the variability for females. Analysis of the classification accuracy of the males' data was precluded because 91% of the males passed the PARE. Classification accuracy of the females' data, using logistic regression, produced a two-variable model (agility, 1.5-mile endurance run) with 93% overall classification accuracy.
Nathan, Brian J; Golston, Levi M; O'Brien, Anthony S; Ross, Kevin; Harrison, William A; Tao, Lei; Lary, David J; Johnson, Derek R; Covington, April N; Clark, Nigel N; Zondlo, Mark A
2015-07-07
A model aircraft equipped with a custom laser-based, open-path methane sensor was deployed to quantify the methane leak rate and its variability at a natural gas compressor station in the Barnett Shale. The open-path, laser-based sensor provides fast (10 Hz) and precise (0.1 ppmv) measurements of methane in a compact package, while the remote-control aircraft provides nimble and safe operation around a local source. Emission rates were measured from 22 flights over a one-week period. Mean emission rates of 14 ± 8 g CH4 s⁻¹ (7.4 ± 4.2 g CH4 s⁻¹ median) from the station were observed, or approximately 0.02% of the station throughput. Significant variability in emission rates (0.3-73 g CH4 s⁻¹ range) was observed on time scales of hours to days, and plumes showed high spatial variability in the horizontal and vertical dimensions. Given the high spatiotemporal variability of emissions, individual measurements taken over short durations and from ground-based platforms should be used with caution when examining compressor station emissions. More generally, our results demonstrate the unique advantages and challenges of platforms like small unmanned aerial vehicles for quantifying local emission sources to the atmosphere.
Air flow and pollution in a real, heterogeneous urban street canyon: A field and laboratory study
NASA Astrophysics Data System (ADS)
Karra, Styliani; Malki-Epshtein, Liora; Neophytou, Marina K.-A.
2017-09-01
In this work we investigate the influence of real-world conditions, including heterogeneity and natural variability of the background wind, on the air flow and pollutant concentrations in a heterogeneous urban street canyon, using both a series of field measurements and controlled laboratory experiments. Field measurements of wind velocities and carbon monoxide (CO) concentrations were taken under field conditions in a heterogeneous street in a city centre at several cross-sections along the length of the street (each cross-section being of different aspect ratio). The real field background wind was observed to be highly variable, and thus different Intensive Observation Periods (IOPs), each represented by a different mean wind velocity and different wind variability, were defined. Observed pollution concentrations reveal high sensitivity to local parameters: there is a bias towards the side closer to the traffic lane; higher concentrations are found in the centre of the street as compared to cross-sections closer to the junctions; and higher concentrations are found at 1.5 m height from the ground than at 2.5 m height, all of which are of concern regarding pedestrian exposure to traffic-related pollution. A physical model of the same street was produced for the purpose of laboratory experiments, making some geometrical simplifications of complex volumes and extrusions. The physical model was tested in an Atmospheric Boundary Layer water channel, using Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) simultaneously, for flow visualisation as well as for quantitative measurement of concentrations and flow velocities. The wind field conditions were represented by a steady mean approach velocity in the laboratory simulation (essentially representing periods of near-zero wind variability).
The laboratory investigations showed a clear sensitivity of the resulting flow field to the local geometry, and substantial three-dimensional flow patterns were observed throughout the modelled street. The real-field observations and the laboratory measurements were compared. Overall, we found that lower variability in the background wind does not necessarily ensure better agreement between the airflow velocity measured in the field and in the lab. In fact, it was observed that in certain cross-sections the airflow was more affected by particular complex architectural features, such as building extrusions and balconies, which were not represented in the simplified physical model tested in the laboratory, than by the real wind field variability. For wind speed comparisons the most favourable agreement (36.6% of the compared values were within a factor of 2) was found in the case of lowest wind variability and in the section with the simplest geometry, where the physical lab model was most similar to the real street. For wind direction comparisons the most favourable agreement (45.5% of the compared values were within ±45°) was found in the case with higher wind variability but in the cross-sections with more homogeneous geometrical features. Street canyons are often simplified in research and are often modelled as homogeneous symmetrical canyons under steady flow, for practical purposes; our study as a whole demonstrates that natural variability and heterogeneity play a large role in how pollution disperses throughout the street, and therefore further detail in models is vital to understand real-world conditions.
Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.
2008-01-01
Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined either by measuring the volume of material eroded from the channels or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model.
This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.
A multi-sensor remote sensing approach for measuring primary production from space
NASA Technical Reports Server (NTRS)
Gautier, Catherine
1989-01-01
It is proposed to develop a multi-sensor remote sensing method for computing marine primary productivity from space, based on the capability to measure the primary ocean variables which regulate photosynthesis. The three variables and the sensors which measure them are: (1) downwelling photosynthetically available irradiance, measured by the VISSR sensor on the GOES satellite, (2) sea-surface temperature from AVHRR on NOAA series satellites, and (3) chlorophyll-like pigment concentration from the Nimbus-7/CZCS sensor. These and other measured variables would be combined within empirical or analytical models to compute primary productivity. With this proposed capability of mapping primary productivity on a regional scale, we could begin realizing a more precise and accurate global assessment of its magnitude and variability. Applications would include supplementation and expansion on the horizontal scale of ship-acquired biological data, which is more accurate and which supplies the vertical components of the field, monitoring oceanic response to increased atmospheric carbon dioxide levels, correlation with observed sedimentation patterns and processes, and fisheries management.
Temperature, Pressure, and Infrared Image Survey of an Axisymmetric Heated Exhaust Plume
NASA Technical Reports Server (NTRS)
Nelson, Edward L.; Mahan, J. Robert; Birckelbaw, Larry D.; Turk, Jeffrey A.; Wardwell, Douglas A.; Hange, Craig E.
1996-01-01
The focus of this research is to numerically predict an infrared image of a jet engine exhaust plume, given field variables such as temperature, pressure, and exhaust plume constituents as a function of spatial position within the plume, and to compare this predicted image directly with measured data. This work is motivated by the need to validate computational fluid dynamic (CFD) codes through infrared imaging. The technique of reducing the three-dimensional field variable domain to a two-dimensional infrared image invokes the use of an inverse Monte Carlo ray trace algorithm and an infrared band model for exhaust gases. This report describes an experiment in which the above-mentioned field variables were carefully measured. Results from this experiment, namely tables of measured temperature and pressure data, as well as measured infrared images, are given. The inverse Monte Carlo ray trace technique is described. Finally, experimentally obtained infrared images are directly compared to infrared images predicted from the measured field variables.
Mazerolle, M.J.
2006-01-01
In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have been available since the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
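The AIC bookkeeping described here is short enough to sketch directly: compute each model's AIC, convert AIC differences to Akaike weights, and form a model-averaged estimate of a coefficient of interest. The log-likelihoods, parameter counts, and coefficients below are hypothetical numbers, not values from the herpetological data sets.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * k - 2 * loglik

def akaike_weights(aics):
    """Normalized evidence weights from AIC differences."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    z = sum(rel)
    return [r / z for r in rel]

# Hypothetical candidate models: (log-likelihood, n_params, beta_hat, se_beta)
models = [
    (-120.3, 3, 1.8, 0.6),
    (-119.9, 4, 2.1, 0.7),
    (-123.5, 2, 1.2, 0.5),
]
aics = [aic(ll, k) for ll, k, _, _ in models]
w = akaike_weights(aics)
# Model-averaged (multimodel) estimate of the coefficient of interest
beta_avg = sum(wi * b for wi, (_, _, b, _) in zip(w, models))
print([round(a, 1) for a in aics], [round(x, 3) for x in w], round(beta_avg, 2))
```

The weights quantify the strength of evidence for each model, and the averaged coefficient reflects model-selection uncertainty rather than conditioning on a single "best" model.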
NASA Astrophysics Data System (ADS)
Wu, Qiang; Zhao, Dekang; Wang, Yang; Shen, Jianjun; Mu, Wenping; Liu, Honglei
2017-11-01
Water inrush from coal-seam floors greatly threatens mining safety in North China and is a complex process controlled by multiple factors. This study presents a mathematical assessment system for coal-floor water-inrush risk based on the variable-weight model (VWM) and unascertained measure theory (UMT). In contrast to the traditional constant-weight model (CWM), which assigns a fixed weight to each factor, the VWM varies the weights with the factor-state values. The UMT employs the confidence principle, which is more effective in ordered partition problems than the maximum-membership principle adopted in earlier mathematical theories. The method is applied to the Datang Tashan Coal Mine in North China. First, eight main controlling factors are selected to construct the comprehensive evaluation index system. Subsequently, an incentive-penalty variable-weight model is built to calculate the variable weights of each factor. Then, the VWM-UMT model is established using the quantitative risk-grade division of each factor according to the UMT. On this basis, the risk of coal-floor water inrush in Tashan Mine No. 8 is divided into five grades. For comparison, the CWM is also adopted for the risk assessment, and a map of the differences between the two methods is obtained. Finally, verification against recorded water-inrush points indicates that the VWM-UMT model is more feasible and reasonable. The model has great potential and practical significance for future engineering applications.
Improved estimation of PM2.5 using Lagrangian satellite-measured aerosol optical depth
NASA Astrophysics Data System (ADS)
Olivas Saunders, Rolando
Suspended particulate matter (aerosols) with aerodynamic diameters less than 2.5 μm (PM2.5) has negative effects on human health, plays an important role in climate change and also causes the corrosion of structures by acid deposition. Accurate estimates of PM2.5 concentrations are thus relevant in air quality, epidemiology, cloud microphysics and climate forcing studies. Aerosol optical depth (AOD) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) satellite instrument has been used as an empirical predictor to estimate ground-level concentrations of PM2.5. These estimates usually have large uncertainties and errors. The main objective of this work is to assess the value of using upwind (Lagrangian) MODIS-AOD as predictors in empirical models of PM2.5. The upwind locations of the Lagrangian AOD were estimated using modeled backward air trajectories. Since the specification of an arrival elevation is somewhat arbitrary, trajectories were calculated to arrive at four different elevations at ten measurement sites within the continental United States. A systematic examination revealed trajectory model calculations to be sensitive to starting elevation. With a 500 m difference in starting elevation, the 48-hr mean horizontal separation of trajectory endpoints was 326 km. When the difference in starting elevation was doubled and tripled to 1000 m and 1500 m, the mean horizontal separation of trajectory endpoints approximately doubled and tripled to 627 km and 886 km, respectively. A seasonal dependence of this sensitivity was also found: the smallest mean horizontal separation of trajectory endpoints was exhibited during the summer and the largest separations during the winter. A daily average AOD product was generated and coupled to the trajectory model in order to determine AOD values upwind of the measurement sites during the period 2003-2007.
Empirical models that included in situ AOD and upwind AOD as predictors of PM2.5 were generated by multivariate linear regressions using the least squares method. The multivariate models showed improved performance over the single-variable regression (PM2.5 and in situ AOD) models. The statistical significance of the improvement of the multivariate models over the single-variable regression models was tested using the extra sum of squares principle. In many cases, even when the R-squared was high for the multivariate models, the improvement over the single-variable models was not statistically significant. The R-squared of these multivariate models varied with respect to season, with the best performance occurring during the summer months. A set of seasonal categorical variables was included in the regressions to exploit this variability. The multivariate regression models that included these categorical seasonal variables performed better than the models that did not account for seasonal variability. Furthermore, 71% of these regressions exhibited improvement over the single-variable models that was statistically significant at the 95% confidence level.
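The extra-sum-of-squares comparison reduces to a partial F statistic between nested regressions. The sketch below uses simulated stand-ins for PM2.5, in situ AOD, upwind AOD, and seasonal dummies; only the mechanics of the test are meant to carry over, not the numbers.

```python
import numpy as np

def fit_rss(X, y):
    """Least-squares fit; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

# Simulated placeholder data
rng = np.random.default_rng(3)
n = 400
aod = rng.gamma(2.0, 0.1, n)                 # in situ AOD
upwind = rng.gamma(2.0, 0.1, n)              # upwind (Lagrangian) AOD
season = rng.integers(0, 4, n)
dummies = np.eye(4)[season][:, 1:]           # 3 seasonal dummy columns
pm25 = (5 + 40 * aod + 15 * upwind
        + dummies @ np.array([2.0, -1.0, 3.0]) + rng.normal(0, 3, n))

# Nested models: AOD only vs. AOD + upwind AOD + seasonal dummies
X_small = np.column_stack([np.ones(n), aod])
X_full = np.column_stack([X_small, upwind, dummies])
rss1, rss2 = fit_rss(X_small, pm25), fit_rss(X_full, pm25)
df_extra = X_full.shape[1] - X_small.shape[1]
df_resid = n - X_full.shape[1]

# Partial F statistic for the extra predictors
F = ((rss1 - rss2) / df_extra) / (rss2 / df_resid)
print(round(F, 1))
```

A large F (compared against an F distribution with df_extra and df_resid degrees of freedom) indicates the added predictors explain variance beyond chance, which is the criterion applied to the multivariate AOD models.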
Effects of demographic and health variables on Rasch scaled cognitive scores.
Zelinski, Elizabeth M; Gilewski, Michael J
2003-08-01
To determine whether demographic and health variables interact to predict cognitive scores in Asset and Health Dynamics of the Oldest-Old (AHEAD), a representative survey of older Americans, as a test of the developmental discontinuity hypothesis. Rasch modeling procedures were used to rescale cognitive measures into interval scores, equating scales across measures, making it possible to compare predictor effects directly. Rasch scaling also reduces the likelihood of obtaining spurious interactions. Tasks included combined immediate and delayed recall, the Telephone Interview for Cognitive Status (TICS), Series 7, and an overall cognitive score. Demographic variables most strongly predicted performance on all scores, with health variables having smaller effects. Age interacted with both demographic and health variables, but patterns of effects varied. Demographic variables have strong effects on cognition. The developmental discontinuity hypothesis that health variables have stronger effects than demographic ones on cognition in older adults was not supported.
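The dichotomous Rasch model that underlies such rescaling has a one-line item response function: the log-odds of a correct response is the difference between person ability and item difficulty, which is what places scores on a common interval (logit) scale across measures. The ability and difficulty values below are illustrative, not AHEAD estimates.

```python
import math

def p_correct(ability, difficulty):
    """Dichotomous Rasch item response function (abilities in logits)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Equal logit differences give equal odds ratios, the interval-scale property
for theta in (-1.0, 0.0, 1.0):
    print(round(p_correct(theta, 0.0), 3))
```

Because the scale is interval rather than ordinal, predictor effects can be compared directly across the rescaled cognitive measures, and spurious interactions driven by floor/ceiling compression are less likely.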
NASA Astrophysics Data System (ADS)
Wang, Kaicun; Ma, Qian; Li, Zhijun; Wang, Jiankai
2015-07-01
Existing studies have shown that observed surface incident solar radiation (Rs) over China may have important inhomogeneity issues. This study provides metadata and reference data to homogenize observed Rs, from which the decadal variability of Rs over China can be accurately derived. From 1958 to 1990, diffuse solar radiation (Rsdif) and direct solar radiation (Rsdir) were measured separately, and Rs was calculated as their sum. The pyranometers used to measure Rsdif had a strong sensitivity drift problem, which introduced a spurious decreasing trend into the observed Rsdif and Rs data, whereas the observed Rsdir did not suffer from this sensitivity drift problem. From 1990 to 1993, instruments and measurement methods were replaced and measuring stations were restructured in China, which introduced an abrupt increase in the observed Rs. Intercomparisons between observation-based and model-based Rs performed in this research show that sunshine duration (SunDu)-derived Rs is of high quality and can be used as reference data to homogenize observed Rs data. The homogenized and adjusted data of observed Rs combines the advantages of observed Rs in quantifying hourly to monthly variability and SunDu-derived Rs in depicting decadal variability and trend. Rs averaged over 105 stations in China decreased at -2.9 W m-2 per decade from 1961 to 1990 and remained stable afterward. This decadal variability is confirmed by the observed Rsdir and diurnal temperature ranges, and can be reproduced by high-quality Earth System Models. However, neither satellite retrievals nor reanalyses can accurately reproduce such decadal variability over China.
Analysis of the impact of safeguards criteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mullen, M.F.; Reardon, P.T.
As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts. The first explains the technical approach and methodology. The second contains an example application of the methodology. The third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; and there are important interactions between variables, that is, the effects of a given variable often depend on the level or value of some other variable. With the methodology used in this study, these interactions can be quantitatively analyzed, and reasonably good approximate prediction equations can be developed.
Measurement of psychological disorders using cognitive diagnosis models.
Templin, Jonathan L; Henson, Robert A
2006-09-01
Cognitive diagnosis models are constrained (multiple classification) latent class models that characterize the relationship of questionnaire responses to a set of dichotomous latent variables. Having emanated from educational measurement, several aspects of such models seem well suited to use in psychological assessment and diagnosis. This article presents the development of a new cognitive diagnosis model for use in psychological assessment--the DINO (deterministic input; noisy "or" gate) model--which, as an illustrative example, is applied to evaluate and diagnose pathological gamblers. As part of this example, a demonstration of the estimates obtained by cognitive diagnosis models is provided. Such estimates include the probability an individual meets each of a set of dichotomous Diagnostic and Statistical Manual of Mental Disorders (text revision [DSM-IV-TR]; American Psychiatric Association, 2000) criteria, resulting in an estimate of the probability an individual meets the DSM-IV-TR definition for being a pathological gambler. Furthermore, a demonstration of how the hypothesized underlying factors contributing to pathological gambling can be measured with the DINO model is presented, through use of a covariance structure model for the tetrachoric correlation matrix of the dichotomous latent variables representing DSM-IV-TR criteria. Copyright 2006 APA
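The DINO item response function itself is compact: an item is "ideally" endorsed if the respondent possesses at least one of the latent criteria the item taps (the noisy-or gate), and slip/guess parameters then perturb that ideal response. The Q-vector and slip/guess values below are illustrative, not estimates from the pathological-gambling application.

```python
def dino_p(alpha, q, guess, slip):
    """P(endorse item) under the DINO model.
    alpha: 0/1 latent criteria possessed; q: 0/1 criteria the item taps."""
    # Noisy-or gate: possessing any one tapped criterion suffices
    prod = 1
    for a, qk in zip(alpha, q):
        if qk:
            prod *= (1 - a)
    omega = 1 - prod
    # Slip lowers the endorse probability when the gate is on;
    # guess raises it when the gate is off
    return (1 - slip) if omega else guess

q = [1, 1, 0]                                      # item taps criteria 1 and 2
print(dino_p([0, 0, 0], q, guess=0.1, slip=0.2))   # no tapped criteria: 0.1
print(dino_p([0, 1, 0], q, guess=0.1, slip=0.2))   # has criterion 2: 0.8
```

Stringing such items together, the posterior over the latent alpha vector yields the probability that an individual meets each DSM criterion, and hence the diagnostic classification the abstract describes.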
NASA Technical Reports Server (NTRS)
Pfister, G. G.; Emmons, L. K.; Edwards, D. P.; Arellano, A.; Sachse, G.; Campos, T.
2010-01-01
We analyze the transport of pollution across the Pacific during the NASA INTEX-B (Intercontinental Chemical Transport Experiment Part B) campaign in spring 2006 and examine how this year compares to the period 2000 through 2006. In addition to aircraft measurements of carbon monoxide (CO) collected during INTEX-B, we include in this study multi-year satellite retrievals of CO from the Measurements of Pollution in the Troposphere (MOPITT) instrument and simulations from the chemistry transport model MOZART-4. Model tracers are used to examine the contributions of different source regions and source types to pollution levels over the Pacific. Additional modeling studies are performed to separate the impacts of inter-annual variability in meteorology and dynamics from changes in source strength. Inter-annual variability in the tropospheric CO burden over the Pacific and the US as estimated from the MOPITT data ranges up to 7%, and a somewhat smaller estimate (5%) is derived from the model. When keeping the emissions in the model constant between years, the year-to-year changes are reduced (2%), but show that in addition to changes in emissions, variable meteorological conditions also impact transpacific pollution transport. We estimate that about one-third of the variability in the tropospheric CO loading over the contiguous US is explained by changes in emissions and about two-thirds by changes in meteorology and transport. Biomass burning sources are found to be a larger driver of inter-annual variability in the CO loading than fossil and biofuel sources or photochemical CO production, even though their absolute contributions are smaller. Source contribution analysis shows that the aircraft sampling during INTEX-B was fairly representative of the larger-scale region, but with a slight bias towards higher influence from Asian contributions.
González-Ferreiro, Eduardo; Arellano-Pérez, Stéfano; Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Álvarez-González, Juan Gabriel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second system related the canopy variables to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine respectively, whereas the estimated canopy fuel load profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard.
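The first modelling step above describes the vertical fuel profile with a Weibull probability density function. A minimal sketch of how such a profile distributes a stand's total canopy fuel load across height layers, with hypothetical shape, scale, and load values (not the fitted parameters from the paper):

```python
import numpy as np

def weibull_cdf(h, shape, scale):
    """Weibull CDF describing the cumulative fraction of canopy fuel
    found below height h."""
    return 1.0 - np.exp(-(np.asarray(h, dtype=float) / scale) ** shape)

def layer_fuel_load(total_load, layer_edges, shape, scale):
    """Distribute a stand's total canopy fuel load (kg/m^2) across height
    layers by differencing the Weibull CDF at the layer boundaries."""
    cdf = weibull_cdf(layer_edges, shape, scale)
    return total_load * np.diff(cdf)

# Hypothetical stand: 1.2 kg/m^2 total available fuel, 2 m layers from 4-16 m
edges = np.arange(4.0, 17.0, 2.0)
loads = layer_fuel_load(1.2, edges, shape=3.0, scale=10.0)
```

Summing the per-layer loads recovers (almost all of) the total load; the small remainder lies outside the modelled height range.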
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
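For the angular-linear case above (e.g. wind direction versus pollutant level), one standard sample measure related to the canonical-correlation approach is the squared circular-linear correlation between x and the pair (cos θ, sin θ). A sketch with simulated data (the wind/pollutant numbers are invented for illustration):

```python
import numpy as np

def circular_linear_corr_sq(theta, x):
    """Squared circular-linear correlation between an angle theta (radians)
    and a linear variable x, i.e. the squared canonical correlation between
    x and the embedding (cos theta, sin theta)."""
    c, s = np.cos(theta), np.sin(theta)
    rxc = np.corrcoef(x, c)[0, 1]
    rxs = np.corrcoef(x, s)[0, 1]
    rcs = np.corrcoef(c, s)[0, 1]
    return (rxc**2 + rxs**2 - 2.0 * rxc * rxs * rcs) / (1.0 - rcs**2)

# Illustrative data: pollutant level loosely tied to wind direction
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
pollutant = 2.0 + np.cos(theta - 1.0) + 0.3 * rng.standard_normal(500)
r2 = circular_linear_corr_sq(theta, pollutant)
```

The measure lies in [0, 1] and is invariant to rotations of the angular origin, which is what makes it suitable for directional data.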
Nikolić, Biljana; Martinović, Jelena; Matić, Milan; Stefanović, Đorđe
2018-05-29
Different variables determine the performance of cyclists, which raises the question of how these parameters may help in their classification by specialty. The aim of the study was to determine differences in cardiorespiratory parameters of male cyclists according to their specialty, flat rider (N=21), hill rider (N=35) and sprinter (N=20), and to obtain a multivariate model for further classification of cyclists by specialty based on selected variables. Seventeen variables were measured at submaximal and maximum load on the cycle ergometer Cosmed E 400HK (Cosmed, Rome, Italy) (initial 100W with 25W increase, 90-100 rpm). Multivariate discriminant analysis was used to determine which variables group cyclists within their specialty, and to predict which variables can direct cyclists to a particular specialty. Among the nine variables that statistically contribute to the discriminant power of the model, power achieved at the anaerobic threshold and CO2 production had the biggest impact. The obtained discriminatory model correctly classified 91.43% of flat riders and 85.71% of hill riders, while sprinters were classified completely correctly (100%); overall, 92.10% of examinees were correctly classified, which points to the strength of the discriminatory model. Respiratory indicators contribute most to the discriminant power of the model, which may significantly benefit training practice and laboratory tests in future.
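The classification step above uses multivariate (linear) discriminant analysis. A minimal numpy sketch of its core, class means plus a pooled within-class covariance, on synthetic stand-ins for the ergometer variables (three specialties, two variables, invented group means):

```python
import numpy as np

def fit_lda(X, y):
    """Fit a simple linear discriminant model: per-class means and the
    inverse of the pooled within-class covariance."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    pooled = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
    pooled /= len(y) - len(classes)
    return classes, means, np.linalg.inv(pooled)

def predict_lda(model, X):
    classes, means, prec = model
    # Assign each cyclist to the class with the smallest Mahalanobis
    # distance to the class mean (equal priors assumed)
    scores = np.stack([
        -0.5 * np.einsum('ij,jk,ik->i', X - means[c], prec, X - means[c])
        for c in classes])
    return classes[np.argmax(scores, axis=0)]

# Synthetic groups standing in for flat riders, hill riders, sprinters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, size=(30, 2))
               for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 30)
model = fit_lda(X, y)
acc = float((predict_lda(model, X) == y).mean())
```

With well-separated group means the resubstitution accuracy is high, analogous to the >90% correct classification reported in the abstract.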
Liu, Shuguang; Tan, Zhengxi; Chen, Mingshi; Liu, Jinxun; Wein, Anne; Li, Zhengpeng; Huang, Shengli; Oeding, Jennifer; Young, Claudia; Verma, Shashi B.; Suyker, Andrew E.; Faulkner, Stephen P.
2012-01-01
The General Ensemble Biogeochemical Modeling System (GEMS) was designed with two distinguishing features. First, to reduce the impact of uncertainties in individual models, it uses multiple site-scale biogeochemical models to perform model simulations. Second, it adopts Monte Carlo ensemble simulations of each simulation unit (one site/pixel or group of sites/pixels with similar biophysical conditions) to incorporate uncertainties and variability (as measured by variances and covariance) of input variables into model simulations. In this chapter, we illustrate the applications of GEMS at the site and regional scales with an emphasis on incorporating agricultural practices. Challenges in modeling soil carbon dynamics and greenhouse gas emissions are also discussed.
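The Monte Carlo ensemble step above propagates input variances and covariances into the model output. A minimal sketch with a toy equilibrium soil-carbon model and invented input statistics (the model form, means, and covariance are illustrative, not GEMS's actual inputs):

```python
import numpy as np

def toy_soc_model(npp, decay):
    """Toy equilibrium soil-carbon stock: carbon inputs / decay rate."""
    return npp / decay

rng = np.random.default_rng(2)

# Hypothetical mean vector and covariance of two inputs (NPP in g C/m^2/yr,
# decay rate in 1/yr), standing in for a simulation unit's input statistics
mean = np.array([500.0, 0.02])
cov = np.array([[2500.0, 0.0],
                [0.0,    1e-6]])

# Monte Carlo ensemble: sample inputs jointly, run the model per draw
draws = rng.multivariate_normal(mean, cov, size=5000)
stocks = toy_soc_model(draws[:, 0], draws[:, 1])
soc_mean, soc_sd = stocks.mean(), stocks.std()
```

The ensemble yields not just a central SOC estimate but a spread that reflects the input uncertainty, which is the point of the GEMS design.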
Meteorological Contribution to Variability in Particulate Matter Concentrations
NASA Astrophysics Data System (ADS)
Woods, H. L.; Spak, S. N.; Holloway, T.
2006-12-01
Local concentrations of fine particulate matter (PM) are driven by a number of processes, including emissions of aerosols and gaseous precursors, atmospheric chemistry, and meteorology at local, regional, and global scales. We apply statistical downscaling methods, typically used for regional climate analysis, to estimate the contribution of regional scale meteorology to PM mass concentration variability at a range of sites in the Upper Midwestern U.S. Multiple years of daily PM10 and PM2.5 data, reported by the U.S. Environmental Protection Agency (EPA), are correlated with large-scale meteorology over the region from the National Centers for Environmental Prediction (NCEP) reanalysis data. We use two statistical downscaling methods (multiple linear regression, MLR, and analog) to identify which processes have the greatest impact on aerosol concentration variability. Empirical Orthogonal Functions of the NCEP meteorological data are correlated with PM timeseries at measurement sites. We examine which meteorological variables exert the greatest influence on PM variability, and which sites exhibit the greatest response to regional meteorology. To evaluate model performance, measurement data are withheld for limited periods, and compared with model results. Preliminary results suggest that regional meteorological processes account for over 50% of aerosol concentration variability at study sites.
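The EOF-plus-MLR pipeline above can be sketched compactly: EOFs of the meteorological anomaly field via SVD, then regression of the PM series on the leading principal components. All data here are synthetic stand-ins for the NCEP fields and EPA monitors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a reanalysis field: 365 days x 50 grid cells
field = rng.standard_normal((365, 50))
anom = field - field.mean(axis=0)            # daily anomalies

# EOF analysis via SVD: rows of Vt are spatial patterns (EOFs),
# U * S gives the principal-component time series
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :5] * S[:5]                        # leading 5 PCs

# Hypothetical PM2.5 series partly driven by the first meteorological mode
pm = 10.0 + 0.8 * pcs[:, 0] + rng.standard_normal(365)

# MLR of PM on the PCs: the statistical-downscaling step
X = np.column_stack([np.ones(365), pcs])
beta, *_ = np.linalg.lstsq(X, pm, rcond=None)
r2 = float(1 - np.var(pm - X @ beta) / np.var(pm))
```

The resulting R² plays the role of the "fraction of PM variability explained by regional meteorology" quoted in the abstract; withholding a validation period, as the authors do, guards against the overfitting inherent in regressing on several PCs.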
NASA Astrophysics Data System (ADS)
Offerle, Brian
Urban environmental problems related to air quality, thermal stress, and issues of water demand and quality, all of which are linked directly or indirectly to urban climate, are emerging as major environmental concerns at the start of the 21st century. Thus there are compelling social, political, economic, and scientific reasons that make the study and understanding of the fundamental causes of urban climates critically important. This research addresses these topics through an intensive study of the surface energy balance of Lodz, Poland. The research examines the temporal variability in long-term measurements of urban surface-atmosphere exchange at a downtown location and the spatial variability of this exchange over distinctly different neighborhoods using shorter-term observations. These observations provide the basis for an evaluation of surface energy balance models. Monthly patterns in energy exchange are consistent from year to year, with variability determined by net radiation and the timing and amount of precipitation. Spatial variability can be determined from plan area fractions of vegetation and impervious surface, though heat storage exerts a strong control on shorter-term variability of energy exchange within and between locations in an urban area. Anthropogenic heat fluxes provide most of the energy driving surface-atmosphere exchange in winter. From a modeling perspective, sensible heat fluxes can be reliably determined from radiometrically sensed surface temperatures, and spatially representative surface-atmosphere exchange in an urban area can be determined from satellite remote sensing products. Models of the urban surface energy balance showed good agreement with mean values of energy exchange and under most conditions represented well the temporal variability due to synoptic and shorter time scale forcing.
Harmon, Brook E.; Nigg, Claudio R.; Long, Camonia; Amato, Katie; Anwar, Mahabub-Ul; Kutchman, Eve; Anthamatten, Peter; Browning, Raymond C.; Brink, Lois; Hill, James O.
2014-01-01
Objectives Social Cognitive Theory (SCT) has often been used as a guide to predict and modify physical activity (PA) behavior. We assessed the ability of commonly investigated SCT variables and perceived school environment variables to predict PA among elementary students. We also examined differences in influences between Hispanic and non-Hispanic students. Design This analysis used baseline data collected from eight schools that participated in a four-year study of a combined school-day curriculum and environmental intervention. Methods Data were collected from 393 students. A 3-step linear regression was used to measure associations between PA level, SCT variables (self-efficacy, social support, enjoyment), and perceived environment variables (schoolyard structures, condition, equipment/supervision). Logistic regression assessed associations between variables and whether students met PA recommendations. Results School and sex explained 6% of the moderate-to-vigorous PA models' variation. SCT variables explained an additional 15% of the models' variation, with much of the model's predictive ability coming from self-efficacy and social support. Sex was more strongly associated with PA level among Hispanic students, while self-efficacy was more strongly associated among non-Hispanic students. Perceived environment variables contributed little to the models. Conclusions Our findings add to the literature on the influences of PA among elementary-aged students. The differences seen in the influence of sex and self-efficacy among non-Hispanic and Hispanic students suggest that these are areas where PA interventions could be tailored to improve efficacy. Additional research is needed to understand whether different measures of perceived environment, or perceptions at different ages, may better predict PA. PMID:24772004
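The 3-step (hierarchical) linear regression above adds blocks of predictors in sequence and reports the incremental variance explained. A sketch with fully synthetic data (same sample size as the study, but invented effect sizes and variable names):

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(1 - np.var(y - X1 @ beta) / np.var(y))

rng = np.random.default_rng(4)
n = 393  # matches the study's sample size; the data are entirely synthetic
sex = rng.integers(0, 2, n).astype(float)
self_eff = rng.standard_normal(n)   # SCT: self-efficacy
social = rng.standard_normal(n)     # SCT: social support
schoolyard = rng.standard_normal(n) # perceived environment
pa = (0.4 * sex + 0.6 * self_eff + 0.4 * social
      + 0.05 * schoolyard + rng.standard_normal(n))

# Step 1: demographics; Step 2: + SCT block; Step 3: + environment block
r2_step1 = r_squared(np.column_stack([sex]), pa)
r2_step2 = r_squared(np.column_stack([sex, self_eff, social]), pa)
r2_step3 = r_squared(np.column_stack([sex, self_eff, social, schoolyard]), pa)
delta_sct = r2_step2 - r2_step1   # incremental R^2 of the SCT block
delta_env = r2_step3 - r2_step2   # incremental R^2 of the environment block
```

The incremental ΔR² per block is what the abstract reports (6% for school/sex, +15% for SCT variables, little for perceived environment).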
Rothman, Michael J; Rothman, Steven I; Beals, Joseph
2013-10-01
Patient condition is a key element in communication between clinicians. However, there is no generally accepted definition of patient condition that is independent of diagnosis and that spans acuity levels. We report the development and validation of a continuous measure of general patient condition that is independent of diagnosis, and that can be used for medical-surgical as well as critical care patients. A survey of Electronic Medical Record data identified common, frequently collected non-static candidate variables as the basis for a general, continuously updated patient condition score. We used a new methodology to estimate in-hospital risk associated with each of these variables. A risk function for each candidate input was computed by comparing the final pre-discharge measurements with 1-year post-discharge mortality. Step-wise logistic regression of the variables against 1-year mortality was used to determine the importance of each variable. The final set of selected variables consisted of 26 clinical measurements from four categories: nursing assessments, vital signs, laboratory results and cardiac rhythms. We then constructed a heuristic model quantifying patient condition (overall risk) by summing the single-variable risks. The model's validity was assessed against outcomes from 170,000 medical-surgical and critical care patients, using data from three US hospitals. Outcome validation across hospitals yields an area under the receiver operating characteristic curve (AUC) of ≥0.92 when separating hospice/deceased from all other discharge categories, an AUC of ≥0.93 when predicting 24-h mortality and an AUC of 0.62 when predicting 30-day readmissions. Correspondence with outcomes reflective of patient condition across the acuity spectrum indicates utility in both medical-surgical units and critical care units.
The model output, which we call the Rothman Index, may provide clinicians with a longitudinal view of patient condition to help address known challenges in caregiver communication, continuity of care, and earlier detection of acuity trends. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
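The score construction above, summing single-variable risk functions and validating with an AUC, can be sketched as follows. The two risk functions, their thresholds, and the patient data are hypothetical illustrations, not the Rothman Index's actual 26 variables or published risk curves:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0))

# Hypothetical single-variable risk functions (heart rate in bpm,
# creatinine in mg/dL); the composite score is simply their sum
risk_hr = lambda hr: np.clip((np.abs(hr - 75) - 15) / 30, 0, 1)
risk_cr = lambda cr: np.clip((cr - 1.2) / 2.0, 0, 1)

rng = np.random.default_rng(5)
n = 2000
sick = rng.random(n) < 0.3  # simulated adverse-outcome group
hr = np.where(sick, rng.normal(110, 15, n), rng.normal(75, 10, n))
cr = np.where(sick, rng.normal(2.0, 0.5, n), rng.normal(1.0, 0.2, n))
score = risk_hr(hr) + risk_cr(cr)
model_auc = auc(score, sick.astype(int))
```

The additive structure keeps the score interpretable: each variable's contribution to overall risk can be read off directly, which matters for the clinician-communication goal stated above.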
Model Attitude and Deformation Measurements at the NASA Glenn Research Center
NASA Technical Reports Server (NTRS)
Woike, Mark R.
2008-01-01
The NASA Glenn Research Center is currently participating in an American Institute of Aeronautics and Astronautics (AIAA) sponsored Model Attitude and Deformation Working Group. This working group is chartered to develop a best practices document dealing with two primary areas of wind tunnel measurements: 1) model attitude, including alpha, beta and roll angle, and 2) model deformation. Model attitude is a principal variable in making aerodynamic and force measurements in a wind tunnel. Model deformation affects measured forces, moments and other measured aerodynamic parameters. The working group comprises members from industry, academia, and the Department of Defense (DoD). Each member of the working group gave a presentation on the methods and techniques that they are using to make model attitude and deformation measurements. This presentation covers the NASA Glenn Research Center's approach to making model attitude and deformation measurements.
Modelling the co-evolution of indirect genetic effects and inherited variability.
Marjanovic, Jovana; Mulder, Han A; Rönnegård, Lars; Bijma, Piter
2018-03-28
When individuals interact, their phenotypes may be affected not only by their own genes but also by genes in their social partners. This phenomenon is known as Indirect Genetic Effects (IGEs). In aquaculture species and some plants, however, competition not only affects trait levels of individuals, but also inflates variability of trait values among individuals. In the field of quantitative genetics, the variability of trait values has been studied as a quantitative trait in itself, and is often referred to as inherited variability. Such studies, however, consider only the genetic effect of the focal individual on trait variability and do not make a connection to competition. Although the observed phenotypic relationship between competition and variability suggests an underlying genetic relationship, the current quantitative genetic models of IGE and inherited variability do not allow for such a relationship. The lack of quantitative genetic models that connect IGEs to inherited variability limits our understanding of the potential of variability to respond to selection, both in nature and agriculture. Models of trait levels, for example, show that IGEs may considerably change heritable variation in trait values. Currently, we lack the tools to investigate whether this result extends to variability of trait values. Here we present a model that integrates IGEs and inherited variability. In this model, the target phenotype, say growth rate, is a function of the genetic and environmental effects of the focal individual and of the difference in trait value between the social partner and the focal individual, multiplied by a regression coefficient. The regression coefficient is a genetic trait, which is a measure of cooperation; a negative value indicates competition, a positive value cooperation, and an increasing value due to selection indicates the evolution of cooperation. 
In contrast to the existing quantitative genetic models, our model allows for co-evolution of IGEs and variability, as the regression coefficient can respond to selection. Our simulations show that the model results in increased variability of body weight with increasing competition. When competition decreases, i.e., cooperation evolves, variability becomes significantly smaller. Hence, our model facilitates quantitative genetic studies on the relationship between IGEs and inherited variability. Moreover, our findings suggest that we may have been overlooking an entire level of genetic variation in variability, the one due to IGEs.
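The model structure described above, a focal individual's phenotype equal to its own effects plus a heritable regression coefficient times the partner-minus-focal trait difference, can be illustrated with a small simulation. The parameter values are invented, and this is a sketch of the model's structure rather than the authors' exact parameterisation:

```python
import numpy as np

rng = np.random.default_rng(6)

def phenotypic_sd(mean_b, n_pairs=20000):
    """Simulate pairs of interacting individuals. A focal phenotype is its
    own (genetic + environmental) effect plus b * (partner value - own
    value), where b is itself heritable: b < 0 indicates competition,
    b > 0 cooperation."""
    own = 100 + 10 * rng.standard_normal((n_pairs, 2))   # own effects
    b = mean_b + 0.05 * rng.standard_normal(n_pairs)     # heritable coefficient
    z_focal = own[:, 0] + b * (own[:, 1] - own[:, 0])
    return float(z_focal.std())

sd_compete = phenotypic_sd(-0.4)    # population mean b < 0: competition
sd_cooperate = phenotypic_sd(+0.4)  # population mean b > 0: cooperation
```

The simulation reproduces the qualitative result stated above: variability of the trait (here, "body weight") inflates under competition and shrinks as cooperation evolves, because a negative b amplifies the contribution of between-individual differences while a positive b damps it.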
Sun, Jennifer K.; Qin, Haijing; Aiello, Lloyd Paul; Melia, Michele; Beck, Roy W.; Andreoli, Christopher M.; Edwards, Paul A.; Glassman, Adam R.; Pavlica, Michael R.
2012-01-01
Objective To compare visual acuity (VA) scores after autorefraction versus research protocol manual refraction in eyes of patients with diabetes and a wide range of VA. Methods Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) VA Test© letter score (EVA) was measured after autorefraction (AR-EVA) and after Diabetic Retinopathy Clinical Research Network (DRCR.net) protocol manual refraction (MR-EVA). Testing order was randomized, study participants and VA examiners were masked to refraction source, and a second EVA utilizing an identical manual refraction (MR-EVAsupl) was performed to determine test-retest variability. Results In 878 eyes of 456 study participants, median MR-EVA was 74 (Snellen equivalent approximately 20/32). Spherical equivalent was often similar for manual and autorefraction (median difference: 0.00, 5th and 95th percentiles −1.75 to +1.13 Diopters). However, on average, MR-EVA results were slightly better than AR-EVA results across the entire VA range. Furthermore, variability between AR-EVA and MR-EVA was substantially greater than the test-retest variability of MR-EVA (P<0.001). Variability of differences was highly dependent on autorefractor model. Conclusions Across a wide range of VA at multiple sites using a variety of autorefractors, VA measurements tend to be worse with autorefraction than manual refraction. Differences between individual autorefractor models were identified. However, even among autorefractor models comparing most favorably to manual refraction, VA variability between autorefraction and manual refraction is higher than the test-retest variability of manual refraction. The results suggest that with current instruments, autorefraction is not an acceptable substitute for manual refraction for most clinical trials with primary outcomes dependent on best-corrected VA. PMID:22159173
Predictors of persistent pain after total knee arthroplasty: a systematic review and meta-analysis.
Lewis, G N; Rice, D A; McNair, P J; Kluger, M
2015-04-01
Several studies have identified clinical, psychosocial, patient characteristic, and perioperative variables that are associated with persistent postsurgical pain; however, the relative effect of these variables has yet to be quantified. The aim of the study was to provide a systematic review and meta-analysis of predictor variables associated with persistent pain after total knee arthroplasty (TKA). Included studies were required to measure predictor variables prior to or at the time of surgery, include a pain outcome measure at least 3 months post-TKA, and include a statistical analysis of the effect of the predictor variable(s) on the outcome measure. Counts were undertaken of the number of times each predictor was analysed and the number of times it was found to have a significant relationship with persistent pain. Separate meta-analyses were performed to determine the effect size of each predictor on persistent pain. Outcomes from studies implementing uni- and multivariable statistical models were analysed separately. Thirty-two studies involving almost 30 000 patients were included in the review. Preoperative pain was the predictor that most commonly demonstrated a significant relationship with persistent pain across uni- and multivariable analyses. In the meta-analyses of data from univariate models, the largest effect sizes were found for: other pain sites, catastrophizing, and depression. For data from multivariate models, significant effects were evident for: catastrophizing, preoperative pain, mental health, and comorbidities. Catastrophizing, mental health, preoperative knee pain, and pain at other sites are the strongest independent predictors of persistent pain after TKA. © The Author 2014. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
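The per-predictor meta-analyses above pool effect sizes across studies. A minimal sketch of fixed-effect inverse-variance pooling; the per-study log odds ratios and variances are hypothetical, not values extracted from the review:

```python
import numpy as np

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance meta-analysis: pooled effect estimate
    and its standard error from per-study effect sizes and variances."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = float(np.sum(w * effects) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))
    return est, se

# Hypothetical log odds ratios for one predictor (e.g. catastrophizing)
# from four studies, with their sampling variances
log_or = np.array([0.6, 0.4, 0.9, 0.5])
var = np.array([0.04, 0.09, 0.16, 0.05])
est, se = pooled_effect(log_or, var)
```

Precise studies (small variance) receive proportionally more weight, which is why a single large trial can dominate a pooled predictor estimate.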
Genetic influences on heart rate variability
Golosheykin, Simon; Grant, Julia D.; Novak, Olga V.; Heath, Andrew C.; Anokhin, Andrey P.
2016-01-01
Heart rate variability (HRV) is the variation of cardiac inter-beat intervals over time resulting largely from the interplay between the sympathetic and parasympathetic branches of the autonomic nervous system. Individual differences in HRV are associated with emotion regulation, personality, psychopathology, cardiovascular health, and mortality. Previous studies have shown significant heritability of HRV measures. Here we extend genetic research on HRV by investigating sex differences in genetic underpinnings of HRV, the degree of genetic overlap among different measurement domains of HRV, and phenotypic and genetic relationships between HRV and the resting heart rate (HR). We performed electrocardiogram (ECG) recordings in a large population-representative sample of young adult twins (n = 1060 individuals) and computed HRV measures from three domains: time, frequency, and nonlinear dynamics. Genetic and environmental influences on HRV measures were estimated using linear structural equation modeling of twin data. The results showed that variability of HRV and HR measures can be accounted for by additive genetic and non-shared environmental influences (AE model), with no evidence for significant shared environmental effects. Heritability estimates ranged from 47 to 64%, with little difference across HRV measurement domains. Genetic influences did not differ between genders for most variables except the square root of the mean squared differences between successive R-R intervals (RMSSD, higher heritability in males) and the ratio of low to high frequency power (LF/HF, distinct genetic factors operating in males and females). The results indicate high phenotypic and especially genetic correlations between HRV measures from different domains, suggesting that >90% of genetic influences are shared across measures. Finally, about 40% of genetic variance in HRV was shared with HR. 
In conclusion, both HR and HRV measures are highly heritable traits in the general population of young adults, with high degree of genetic overlap across different measurement domains. PMID:27114045
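Two of the time-domain HRV measures named above, RMSSD and the standard deviation of R-R intervals (SDNN), have standard definitions that are easy to compute from an inter-beat interval series. The R-R series here is synthetic, generated purely to exercise the functions:

```python
import numpy as np

def rmssd(rr_ms):
    """Time-domain HRV: square root of the mean squared successive
    differences of R-R intervals (ms)."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def sdnn(rr_ms):
    """Time-domain HRV: standard deviation of the R-R intervals (ms)."""
    return float(np.std(np.asarray(rr_ms, dtype=float)))

# Illustrative R-R series: ~70 bpm (857 ms mean) with beat-to-beat variation
rng = np.random.default_rng(7)
rr = 857 + rng.normal(0, 30, 300)
hrv_rmssd = rmssd(rr)
hrv_sdnn = sdnn(rr)
```

RMSSD emphasizes beat-to-beat (high-frequency, largely parasympathetic) variation, while SDNN reflects overall variability, which is one reason the study treats time, frequency, and nonlinear measures as separate domains.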
NASA Astrophysics Data System (ADS)
Raz-Yaseef, N.; Sonnentag, O.; Kobayashi, H.; Chen, J. M.; Verfaillie, J. G.; Ma, S.; Baldocchi, D. D.
2011-12-01
Semi-arid climates experience large seasonal and inter-annual variability in radiation and precipitation, creating natural conditions well suited to studying how year-to-year changes affect atmosphere-biosphere fluxes. Savanna ecosystems in particular, which combine tree and below-canopy components, create a unique environment in which phenology changes dramatically between seasons. We used a 10-year flux database to characterize seasonal and interannual variability of climatic inputs and fluxes, and to evaluate model capability to reproduce the observed variability. This is based on the perception that a model's capability to reconstruct the deviation, and not just the average, is important for correctly predicting ecosystem sensitivity to climate change. Our research site is a low-density, low-LAI (0.8) semi-arid savanna located at Tonzi Ranch, Northern California. In this system, trees are active during the warm season (Mar - Oct), and grasses are active during the wet season (Dec - May). Measurements of carbon and water fluxes above and below the tree canopy using eddy covariance and supplementary measurements have been made since 2001. Fluxes were simulated using bio-meteorological process-oriented ecosystem models: BEPS and 3D-CANOAK. Models were partly capable of reproducing fluxes on daily scales (R2=0.66). We then compared model outputs for different ecosystem components and seasons, and found distinct seasons with high correlations while other seasons were poorly represented. Agreement was much better for ET than for GPP. The understory was better simulated than the overstory. CANOAK overestimated spring understory fluxes, probably owing to its capability to directly calculate 3D radiative transfer. BEPS underestimated spring understory fluxes, following its prescribed grass die-off. Both models underestimated peak spring overstory fluxes.
During winter tree dormancy, modeled fluxes were null, but occasional high fluxes of both ET and GPP were measured following precipitation events, likely produced by an adverse measurement effect. This analysis enabled us to pinpoint specific areas where the models break down, and stresses that model capability to reproduce fluxes varies among seasons and ecosystem components. The combined response was such that agreement decreased when ecosystem fluxes were partitioned between overstory and understory fluxes. Model performance also decreased with time scale; while performance was high for some seasons, the models were less capable of reproducing the high variability in understory fluxes versus the conservative overstory fluxes on annual scales. Discrepancies were not always the models' fault; agreement improved substantially when measurements of overstory fluxes during precipitation events were excluded. The conclusions from this research help answer the critical question of the level and type of detail needed to correctly predict ecosystem response to environmental and climatic change.
Walz, Yvonne; Wegmann, Martin; Dech, Stefan; Vounatsou, Penelope; Poda, Jean-Noël; N'Goran, Eliézer K.; Utzinger, Jürg; Raso, Giovanna
2015-01-01
Background Schistosomiasis is the most widespread water-based disease in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and human water contact patterns. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. We investigated the potential of remote sensing to characterize habitat conditions of parasite and intermediate host snails and discuss the relevance for public health. Methodology We employed high-resolution remote sensing data, environmental field measurements, and ecological data to model environmental suitability for schistosomiasis-related parasite and snail species. The model was developed for Burkina Faso using a habitat suitability index (HSI). The plausibility of remote sensing habitat variables was validated using field measurements. The established model was transferred to different ecological settings in Côte d’Ivoire and validated against readily available survey data from school-aged children. Principal Findings Environmental suitability for schistosomiasis transmission was spatially delineated and quantified by seven habitat variables derived from remote sensing data. The strengths and weaknesses highlighted by the plausibility analysis showed that temporal dynamic water and vegetation measures were particularly useful to model parasite and snail habitat suitability, whereas the measurement of water surface temperature and topographic variables did not perform appropriately. The transferability of the model showed significant relations between the HSI and infection prevalence in study sites of Côte d’Ivoire. Conclusions/Significance A predictive map of environmental suitability for schistosomiasis transmission can support measures to gain and sustain control. This is particularly relevant as emphasis is shifting from morbidity control to interrupting transmission. 
Further validation of our mechanistic model needs to be complemented by field data of parasite- and snail-related fitness. Our model provides a useful tool to monitor the development of new hotspots of potential schistosomiasis transmission based on regularly updated remote sensing data. PMID:26587839
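The habitat suitability index (HSI) above combines several remote-sensing habitat variables into a single suitability score. A sketch using a weighted geometric mean, one common HSI aggregation rule, chosen here for illustration rather than taken from the paper; the variable names and weights are hypothetical:

```python
import numpy as np

def hsi(suitabilities, weights):
    """Habitat suitability index as a weighted geometric mean of
    per-variable suitabilities in [0, 1]. A value near 0 in any heavily
    weighted variable drags the whole index down."""
    v = np.clip(np.asarray(suitabilities, dtype=float), 1e-9, 1.0)
    w = np.asarray(weights, dtype=float)
    return float(np.exp(np.sum(w * np.log(v)) / np.sum(w)))

# Hypothetical per-pixel suitabilities derived from remote sensing:
# water presence, vegetation (NDVI-based), and a topographic variable
pixel = [0.9, 0.7, 0.5]
score = hsi(pixel, weights=[2.0, 1.0, 1.0])
```

The geometric (rather than arithmetic) mean encodes the ecological assumption that snail habitat requires all conditions at once, so a low suitability in one variable cannot be fully compensated by the others.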
Walz, Yvonne; Wegmann, Martin; Dech, Stefan; Vounatsou, Penelope; Poda, Jean-Noël; N'Goran, Eliézer K; Utzinger, Jürg; Raso, Giovanna
2015-11-01
Schistosomiasis is the most widespread water-based disease in sub-Saharan Africa. Transmission is governed by the spatial distribution of specific freshwater snails that act as intermediate hosts and human water contact patterns. Remote sensing data have been utilized for spatially explicit risk profiling of schistosomiasis. We investigated the potential of remote sensing to characterize habitat conditions of parasite and intermediate host snails and discuss the relevance for public health. We employed high-resolution remote sensing data, environmental field measurements, and ecological data to model environmental suitability for schistosomiasis-related parasite and snail species. The model was developed for Burkina Faso using a habitat suitability index (HSI). The plausibility of remote sensing habitat variables was validated using field measurements. The established model was transferred to different ecological settings in Côte d'Ivoire and validated against readily available survey data from school-aged children. Environmental suitability for schistosomiasis transmission was spatially delineated and quantified by seven habitat variables derived from remote sensing data. The strengths and weaknesses highlighted by the plausibility analysis showed that temporal dynamic water and vegetation measures were particularly useful to model parasite and snail habitat suitability, whereas the measurement of water surface temperature and topographic variables did not perform appropriately. The transferability of the model showed significant relations between the HSI and infection prevalence in study sites of Côte d'Ivoire. A predictive map of environmental suitability for schistosomiasis transmission can support measures to gain and sustain control. This is particularly relevant as emphasis is shifting from morbidity control to interrupting transmission. Further validation of our mechanistic model needs to be complemented by field data of parasite- and snail-related fitness. 
Our model provides a useful tool to monitor the development of new hotspots of potential schistosomiasis transmission based on regularly updated remote sensing data.
Organic carbon stock modelling for the quantification of the carbon sinks in terrestrial ecosystems
NASA Astrophysics Data System (ADS)
Durante, Pilar; Algeet, Nur; Oyonarte, Cecilio
2017-04-01
Given the recent environmental policies derived from the serious threats posed by global change, practical measures to decrease net CO2 emissions have to be put in place. In this regard, carbon sequestration is a major measure to reduce atmospheric CO2 concentrations in the short and medium term, with terrestrial ecosystems playing a basic role as carbon sinks. Development of tools for quantification, assessment and management of organic carbon in ecosystems, at different scales and under different management scenarios, is essential to achieving these commitments. The aim of this study is to establish a methodological framework for the modelling of this tool, applied to sustainable land use planning and management at spatial and temporal scales. The methodology for carbon stock estimation in ecosystems is based on merging estimates of carbon stored in soils and in aerial biomass. For this purpose, both a spatial variability map of soil organic carbon (SOC) and algorithms for the calculation of forest species biomass will be created. For the modelling of the SOC spatial distribution at different map scales, it is necessary to harmonize and screen the available legacy soil database information. Subsequently, SOC modelling will be based on the SCORPAN model, a quantitative model used to assess the correlation between soil properties and soil-forming factors measured at the same site locations. These factors will be selected from both static variables (terrain morphometric variables) and dynamic variables (climatic variables and vegetation indexes such as NDVI), giving the model its spatio-temporal character. After fitting the predictive model, spatial inference techniques will be used to produce the final map and to extrapolate to areas without information (automated random forest regression kriging). The estimated uncertainty will be calculated to assess model performance at different scales.
Organic carbon in aerial biomass will be estimated using LiDAR (Light Detection And Ranging) algorithms, drawing on the available LiDAR databases. LiDAR statistics (which describe the LiDAR point cloud used to calculate forest stand parameters) will be correlated with different canopy cover variables. The regression models, applied to the total area, will produce a continuous geo-information map for each canopy variable. The CO2 estimation will be calculated using dry-mass conversion factors for each forest species (C kg-CO2 kg equivalent). The result is an organic carbon model at spatio-temporal scale, with different levels of uncertainty associated with the predictive models and the various map scales. However, one of the main expected problems is the heterogeneous spatial distribution of the soil information, which influences the predictions of the models at different spatial scales and, consequently, the SOC map scale. Besides this, the variability and mixture of forest species in the aerial biomass reduce the accuracy of the organic carbon assessment.
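The regression-kriging idea underlying the SOC mapping above — a trend model on covariates plus spatial interpolation of its residuals — can be illustrated with a deliberately simplified sketch. All data are hypothetical; ordinary least squares stands in for the random forest trend model, and inverse-distance weighting stands in for kriging.

```python
# Toy sketch of regression kriging for SOC mapping (hypothetical data).
# Step 1: a linear trend model predicts SOC from a covariate (e.g. NDVI);
# Step 2: the trend residuals are spatially interpolated, here with
# inverse-distance weighting (IDW) as a simple stand-in for kriging.

def fit_trend(covariate, soc):
    """Ordinary least squares fit of soc = a + b * covariate."""
    n = len(covariate)
    mx = sum(covariate) / n
    my = sum(soc) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(covariate, soc))
    sxx = sum((x - mx) ** 2 for x in covariate)
    b = sxy / sxx
    a = my - b * mx
    return a, b

def idw(xy_known, values, xy_target, power=2.0):
    """Inverse-distance-weighted interpolation of residuals."""
    num = den = 0.0
    for (x, y), v in zip(xy_known, values):
        d2 = (x - xy_target[0]) ** 2 + (y - xy_target[1]) ** 2
        if d2 == 0.0:
            return v  # exact hit on a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical soil samples: location, NDVI covariate, measured SOC (kg/m2)
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ndvi = [0.2, 0.4, 0.6, 0.8]
soc = [2.1, 3.0, 4.2, 4.9]

a, b = fit_trend(ndvi, soc)
residuals = [y - (a + b * x) for x, y in zip(ndvi, soc)]

# Predict at an unsampled location with known NDVI
target, target_ndvi = (0.5, 0.5), 0.5
prediction = a + b * target_ndvi + idw(sites, residuals, target)
print(round(prediction, 2))
```

A full implementation would replace the OLS trend with a random forest and the IDW step with a fitted variogram and kriging weights, but the decomposition into trend plus interpolated residual is the same.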
Enhanced future variability during India's rainy season
NASA Astrophysics Data System (ADS)
Menon, Arathy; Levermann, Anders; Schewe, Jacob
2013-04-01
The Indian summer monsoon shapes the livelihood of a large share of the world's population. About 80% of annual precipitation over India occurs during the monsoon season from June through September. Besides the seasonal mean rainfall, day-to-day variability is crucial for the risk of flooding, national water supply, and agricultural productivity. Here we show that the latest ensemble of climate model simulations, prepared for the IPCC's AR-5, consistently projects significant increases in day-to-day rainfall variability under unmitigated climate change. While all models show an increase in day-to-day variability, some capture the observed seasonal mean rainfall over India more realistically than others. While no model's monsoon rainfall exceeds the observed value by more than two standard deviations, half of the models simulate a significantly weaker monsoon than observed. In the ten models that capture seasonal mean rainfall closest to observations, the relative increase in day-to-day variability by the year 2100 ranges from 15% to 48% under the strongest scenario (RCP-8.5). The variability increase per degree of global warming is independent of the scenario in most models and is 8% +/- 4% per K on average. This consistent projection across 20 comprehensive climate models provides confidence in the results and suggests the necessity of profound adaptation measures in the case of unmitigated climate change.
A Prognostic Model for One-year Mortality in Patients Requiring Prolonged Mechanical Ventilation
Carson, Shannon S.; Garrett, Joanne; Hanson, Laura C.; Lanier, Joyce; Govert, Joe; Brake, Mary C.; Landucci, Dante L.; Cox, Christopher E.; Carey, Timothy S.
2009-01-01
Objective: A measure that identifies patients who are at high risk of mortality after prolonged ventilation will help physicians communicate prognosis to patients or surrogate decision-makers. Our objective was to develop and validate a prognostic model for 1-year mortality in patients ventilated for 21 days or more. Design: Prospective cohort study. Setting: University-based tertiary care hospital. Patients: 300 consecutive medical, surgical, and trauma patients requiring mechanical ventilation for at least 21 days were prospectively enrolled. Measurements and Main Results: Predictive variables were measured on day 21 of ventilation for the first 200 patients and entered into logistic regression models with 1-year and 3-month mortality as outcomes. Final models were validated using data from 100 subsequent patients. One-year mortality was 51% in the development set and 58% in the validation set. Independent predictors of mortality included requirement for vasopressors, hemodialysis, platelet count ≤150 ×10⁹/L, and age ≥50. Areas under the ROC curve for the development and validation models were 0.82 (SE 0.03) and 0.82 (SE 0.05), respectively. The model had sensitivity of 0.42 (SE 0.12) and specificity of 0.99 (SE 0.01) for identifying patients who had ≥90% risk of death at 1 year. Observed mortality was highly consistent with both 3- and 12-month predicted mortality. These four predictive variables can be used in a simple prognostic score that clearly identifies low-risk patients (no risk factors, 15% mortality) and high-risk patients (3 or 4 risk factors, 97% mortality). Conclusions: Simple clinical variables measured on day 21 of mechanical ventilation can identify patients at highest and lowest risk of death from prolonged ventilation. PMID:18552692
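The four-predictor count score described in the abstract can be sketched directly. The thresholds (platelets ≤150 ×10⁹/L, age ≥50) and the mortality bands for zero and 3-4 risk factors come from the abstract; the assumption that each predictor contributes one unweighted point is an illustration, as the published model may weight them differently.

```python
# Sketch of the day-21 prognostic count score described above (assumption:
# one point per risk factor, as suggested by the abstract's low/high-risk
# bands; the published model may weight predictors differently).

def prognostic_score(vasopressors, hemodialysis, platelets, age):
    """Count of risk factors present on day 21 of mechanical ventilation."""
    score = 0
    score += 1 if vasopressors else 0
    score += 1 if hemodialysis else 0
    score += 1 if platelets <= 150 else 0   # platelets in 1e9/L
    score += 1 if age >= 50 else 0
    return score

def risk_band(score):
    """Map a score to the 1-year mortality bands reported in the abstract."""
    if score == 0:
        return "low risk (~15% 1-year mortality)"
    if score >= 3:
        return "high risk (~97% 1-year mortality)"
    return "intermediate risk"

print(prognostic_score(True, False, 120, 64))  # -> 3
print(risk_band(3))
```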
Hope and General Self-efficacy: Two Measures of the Same Construct?
Zhou, Mingming; Kam, Chester Chun Seng
2016-07-03
The aim of this study was to test the extent to which a hope measure is equivalent to a general self-efficacy (GSE) measure. Questionnaire data on these two constructs and other external variables were collected from 199 Chinese college students. The factor analytic results suggested that hope and self-efficacy items measured the same construct. The unidimensional model combining hope items and GSE items fit the data as well as the bidimensional model, indicating that their corresponding items measured the same underlying construct. Further analyses showed that hope and GSE did not correlate with external variables differently in a systematic manner; most of the correlational differences were non-significant and negligible. These findings suggest that the literatures on GSE and hope could be integrated, and that researchers need to recognize and acknowledge the conceptual and operational similarities between these constructs.
Cognitive indicators of social anxiety in youth: a structural equation analysis.
Rudy, Brittany M; Davis, Thompson E; Matthews, Russell A
2014-01-01
Previous studies have demonstrated significant relationships among various cognitive variables such as negative cognition, self-efficacy, and social anxiety. Unfortunately, few studies focus on the role of cognition among youth, and researchers often fail to use domain-specific measures when examining cognitive variables. Therefore, the purpose of the present study was to examine domain-specific cognitive variables (i.e., socially oriented negative self-referent cognition and social self-efficacy) and their relationships to social anxiety in children and adolescents using structural equation modeling techniques. A community sample of children and adolescents (n=245; 55.9% female; 83.3% Caucasian, 9.4% African American, 2% Asian, 2% Hispanic, 2% "other," and 1.2% not reported) completed questionnaires assessing social cognition and social anxiety symptomology. Three latent variables were created to examine the constructs of socially oriented negative self-referent cognition (as measured by the SONAS scale), social self-efficacy (as measured by the SEQSS-C), and social anxiety (as measured by the SPAI-C and the Brief SA). The resulting measurement model of latent variables fit the data well. Additionally, consistent with the study hypothesis, results indicated that social self-efficacy likely mediates the relationship between socially oriented negative self-referent cognition and social anxiety, and socially oriented negative self-referent cognition yields significant direct and indirect effects on social anxiety. These findings indicate that socially oriented negative cognitions are associated with youth's beliefs about social abilities and the experience of social anxiety. Future directions for research and study limitations, including use of cross-sectional data, are discussed. © 2013.
Bielská, Lucie; Hovorková, Ivana; Kuta, Jan; Machát, Jiří; Hofman, Jakub
2017-01-01
Artificial soil (AS) is used in soil ecotoxicology as a test medium or reference matrix. AS is prepared according to standard OECD/ISO protocols and components of local sources are usually used by laboratories. This may result in significant inter-laboratory variations in AS properties and, consequently, in the fate and bioavailability of tested chemicals. In order to reveal the extent and sources of variations, the batch equilibrium method was applied to measure the sorption of 2 model compounds (phenanthrene and cadmium) to 21 artificial soils from different laboratories. The distribution coefficients (K d ) of phenanthrene and cadmium varied over one order of magnitude: from 5.3 to 61.5L/kg for phenanthrene and from 17.9 to 190L/kg for cadmium. Variations in phenanthrene sorption could not be reliably explained by measured soil properties; not even by the total organic carbon (TOC) content which was expected. Cadmium logK d values significantly correlated with cation exchange capacity (CEC), pH H2O and pH KCl , with Pearson correlation coefficients of 0.62, 0.80, and 0.79, respectively. CEC and pH H2O together were able to explain 72% of cadmium logK d variability in the following model: logK d =0.29pH H2O +0.0032 CEC -0.53. Similarly, 66% of cadmium logK d variability could be explained by CEC and pH KCl in the model: logKd=0.27pH KCl +0.0028 CEC -0.23. Variable cadmium sorption in differing ASs could be partially treated with these models. However, considering the unpredictable variability of phenanthrene sorption, a more reliable solution for reducing the variability of ASs from different laboratories would be better harmonization of AS preparation and composition. Copyright © 2016 Elsevier Inc. All rights reserved.
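The two fitted cadmium regressions reported above can be applied directly; the coefficients are taken from the abstract, while the pH and CEC inputs below are hypothetical example values (units as in the study).

```python
# The two cadmium log-Kd regressions reported above (coefficients from the
# abstract; the pH and CEC values below are hypothetical examples).

def log_kd_ph_h2o(ph_h2o, cec):
    """logKd = 0.29*pH_H2O + 0.0032*CEC - 0.53 (explained 72% of variability)."""
    return 0.29 * ph_h2o + 0.0032 * cec - 0.53

def log_kd_ph_kcl(ph_kcl, cec):
    """logKd = 0.27*pH_KCl + 0.0028*CEC - 0.23 (explained 66% of variability)."""
    return 0.27 * ph_kcl + 0.0028 * cec - 0.23

# Hypothetical artificial soil: pH_H2O = 6.0, CEC = 100
log_kd = log_kd_ph_h2o(6.0, 100.0)
kd = 10 ** log_kd
print(round(kd, 1))  # falls within the reported 17.9-190 L/kg cadmium range
```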
The Measurement of Commitment to Work.
ERIC Educational Resources Information Center
Coombs, Lolagene C.
1979-01-01
This is the report of a study to determine if there is a model for measuring commitment to a job in a job-family trade-off context. Two crucial variables are discussed and a measurement scale is developed for each. (Author/SA)
Toma, Luiza; Stott, Alistair W; Heffernan, Claire; Ringrose, Siân; Gunn, George J
2013-03-01
The paper analyses the impact of a priori determinants of the biosecurity behaviour of farmers in Great Britain. We use a dataset collected through a stratified telephone survey of 900 cattle and sheep farmers in Great Britain (400 in England and 250 each in Wales and Scotland), which took place between 25 March 2010 and 18 June 2010. The survey was stratified by farm type, farm size, and region. To test the influence of a priori determinants on biosecurity behaviour we used a behavioural economics method, structural equation modelling (SEM) with observed and latent variables. SEM is a statistical technique for testing and estimating causal relationships amongst variables, some of which may be latent, using a combination of statistical data and qualitative causal assumptions. Thirteen latent variables were identified and extracted, expressing the behaviour and the underlying determining factors: experience, economic factors, organic certification of the farm, membership in a cattle/sheep health scheme, perceived usefulness of biosecurity information sources, knowledge about biosecurity measures, perceived importance of specific biosecurity strategies, perceived effect (on the farm business in the past five years) of welfare/health regulation, perceived effect of severe outbreaks of animal diseases, attitudes towards livestock biosecurity, attitudes towards animal welfare, influence on the decision to apply biosecurity measures, and biosecurity behaviour. The SEM model applied to the Great Britain sample has an adequate fit according to measures of absolute, incremental, and parsimonious fit.
The results suggest that farmers' perceived importance of specific biosecurity strategies, organic certification of the farm, knowledge about biosecurity measures, attitudes towards animal welfare, perceived usefulness of biosecurity information sources, perceived effect on the business of severe outbreaks of animal diseases during the past five years, membership in a cattle/sheep health scheme, attitudes towards livestock biosecurity, influence on the decision to apply biosecurity measures, experience, and economic factors significantly influence behaviour (together explaining 64% of the variance in behaviour). Three further models were run for the individual regions (England, Scotland and Wales), each with a smaller number of variables to account for the smaller sample sizes. Results show lower but still substantial levels of variance explained by the individual models (about 40% for each country), and the individual models' results are consistent with those of the total-sample model. The results suggest that ways to achieve behavioural change could include increasing farmers' access to biosecurity information and advice sources. Copyright © 2012 Elsevier B.V. All rights reserved.
On the Power of Multivariate Latent Growth Curve Models to Detect Correlated Change
ERIC Educational Resources Information Center
Hertzog, Christopher; Lindenberger, Ulman; Ghisletta, Paolo; Oertzen, Timo von
2006-01-01
We evaluated the statistical power of single-indicator latent growth curve models (LGCMs) to detect correlated change between two variables (covariance of slopes) as a function of sample size, number of longitudinal measurement occasions, and reliability (measurement error variance). Power approximations following the method of Satorra and Saris…
Stability of Teacher Value-Added Rankings across Measurement Model and Scaling Conditions
ERIC Educational Resources Information Center
Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong
2017-01-01
Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue includes the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…
Multilevel Factor Analysis by Model Segregation: New Applications for Robust Test Statistics
ERIC Educational Resources Information Center
Schweig, Jonathan
2014-01-01
Measures of classroom environments have become central to policy efforts that assess school and teacher quality. This has sparked a wide interest in using multilevel factor analysis to test measurement hypotheses about classroom-level variables. One approach partitions the total covariance matrix and tests models separately on the…
Salt marsh hydrology presents many difficulties from a measurement and modeling standpoint: the bi-directional flows of tidal waters, variable water densities due to mixing of fresh and salt water, significant influences from vegetation, and complex stream morphologies. Because o...
Jensen, Jacob S; Egebo, Max; Meyer, Anne S
2008-05-28
Fast tannin measurement is receiving increased interest, as tannins are important for the mouthfeel and color properties of red wines. Fourier transform mid-infrared spectroscopy allows fast measurement of different wine components, but quantification of tannins is difficult due to interference from the spectral responses of other wine components. Four different variable selection tools were investigated for identifying the most important spectral regions that would allow quantification of tannins from the spectra using partial least-squares (PLS) regression. The study included the development of a new variable selection tool, iterative backward elimination of changeable size intervals PLS. The spectral regions identified by the different variable selection methods were not identical, but all included two regions (1485-1425 and 1060-995 cm⁻¹), which were therefore concluded to be particularly important for tannin quantification. The spectral regions identified by the variable selection methods were used to develop calibration models. All four variable selection methods identified regions that allowed improved quantitative prediction of tannins (RMSEP = 69-79 mg of CE/L; r = 0.93-0.94) compared to a calibration model developed using all variables (RMSEP = 115 mg of CE/L; r = 0.87). Only minor differences in the performance of the variable selection methods were observed.
Daniels, Sarah I; Sillé, Fenna C M; Goldbaum, Audrey; Yee, Brenda; Key, Ellen F; Zhang, Luoping; Smith, Martyn T; Thomas, Reuben
2014-12-01
Blood miRNAs are a promising new area of disease research, but variability in miRNA measurements may limit detection of true-positive findings. Here, we measured sources of miRNA variability and determined whether repeated measures can improve the power to detect fold-change differences between comparison groups. Blood from healthy volunteers (N = 12) was collected at three time points. The miRNAs were extracted by a method predetermined to give the highest miRNA yield. Nine different miRNAs were quantified using different qPCR assays and analyzed using mixed models to identify sources of variability. A larger number of miRNAs from a publicly available blood miRNA microarray dataset with repeated measures were used in a bootstrapping procedure to investigate the effects of repeated measures on the power to detect fold changes in miRNA expression in a theoretical case-control study. Technical variability in qPCR replicates was identified as a significant source of variability (P < 0.05) for all nine miRNAs tested. Variability was larger in the TaqMan qPCR assays (SD = 0.15-0.61) than in the qScript qPCR assays (SD = 0.08-0.14). Inter- and intraindividual and extraction variability also contributed significantly for two miRNAs. The bootstrapping procedure demonstrated that repeated measures (20%-50% of N) increased detection of a 2-fold change for approximately 10% to 45% more miRNAs. Statistical power to detect small fold changes in blood miRNAs can be improved by accounting for sources of variability using repeated measures and choosing appropriate methods to minimize variability in miRNA quantification. This study demonstrates the importance of including repeated measures in experimental designs for blood miRNA research. See all the articles in this CEBP Focus section, "Biomarkers, Biospecimens, and New Technologies in Molecular Epidemiology." ©2014 American Association for Cancer Research.
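The statistical intuition behind the repeated-measures recommendation above can be sketched with hypothetical variance components: averaging k replicates per subject shrinks the within-subject (technical plus intra-individual) contribution to the standard error, which is what raises power to detect a given fold change.

```python
import math

# Sketch of why repeated measures raise power (hypothetical variance
# components on the log-expression scale). A subject mean over k repeated
# measures has variance var_between + var_within / k, so a fixed effect
# size is tested against a smaller standard error as k grows.

def se_of_subject_mean(var_between, var_within, k):
    """Per-subject standard error with k repeated measures averaged."""
    return math.sqrt(var_between + var_within / k)

var_between = 0.30   # biological (inter-individual) variance, assumed
var_within = 0.40    # technical + intra-individual variance, assumed

for k in (1, 2, 4):
    print(k, round(se_of_subject_mean(var_between, var_within, k), 3))
```

Note the diminishing returns: only the within-subject component is divided by k, so the biological variance sets a floor that extra replicates cannot remove.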
NASA Astrophysics Data System (ADS)
Kefauver, Shawn C.; Peñuelas, Josep; Ustin, Susan L.
2012-12-01
The impacts of tropospheric ozone on conifer health in the Sierra Nevada of California, USA, and the Pyrenees of Catalonia, Spain, were measured using field assessments and GIS variables of landscape gradients related to plant water relations, stomatal conductance and hence to ozone uptake. Measurements related to ozone injury included visible chlorotic mottling, needle retention, needle length, and crown depth, which together compose the Ozone Injury Index (OII). The OII values observed in Catalonia were similar to those in California, but OII alone correlated poorly to ambient ozone in all sites. Combining ambient ozone with GIS variables related to landscape variability of plant hydrological status, derived from stepwise regressions, produced models with R2 = 0.35, p = 0.016 in Catalonia, R2 = 0.36, p < 0.001 in Yosemite and R2 = 0.33, p = 0.007 in Sequoia/Kings Canyon National Parks in California. Individual OII components in Catalonia were modeled with improved success compared to the original full OII, in particular visible chlorotic mottling (R2 = 0.60, p < 0.001). The results show that ozone is negatively impacting forest health in California and Catalonia and also that modeling ozone injury improves by including GIS variables related to plant water relations.
Troyer, T W; Miller, K D
1997-07-01
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular-spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random-walk pictures that have previously been proposed. When ISIs are dominated by post-spike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
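A minimal leaky integrate-and-fire simulation makes the recovery-dominated regime above concrete. The parameters here are generic illustrations, not the fitted regular-spiking values from the paper; with a constant suprathreshold current the model fires regularly, and replacing the constant input with fluctuating current around a subthreshold steady state would produce the irregular, Poisson-like ISIs the abstract describes.

```python
# Minimal leaky integrate-and-fire sketch (illustrative parameters, not
# the fitted regular-spiking-cell values from the paper). Constant
# suprathreshold input -> regular firing; noisy input around a
# subthreshold steady state -> the Poisson-like regime discussed above.

def simulate_lif(i_input, t_max=200.0, dt=0.1,
                 tau=20.0, v_rest=-70.0, v_reset=-60.0,
                 v_thresh=-54.0, resistance=10.0):
    """Euler integration of tau*dV/dt = -(V - v_rest) + R*I.

    Returns spike times in ms.
    """
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt / tau * (-(v - v_rest) + resistance * i_input)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # post-spike voltage reset (this sets the gain)
        t += dt
    return spikes

# Steady-state voltage would be -50 mV, above the -54 mV threshold
spikes = simulate_lif(i_input=2.0)
print(len(spikes))
```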
Implementing seasonal carbon allocation into a dynamic vegetation model
NASA Astrophysics Data System (ADS)
Vermeulen, Marleen; Kruijt, Bart; Hickler, Thomas; Forrest, Matthew; Kabat, Pavel
2014-05-01
Long-term measurements of terrestrial fluxes through the FLUXNET eddy covariance network have revealed that carbon and water fluxes can be highly variable from year to year. This so-called interannual variability (IAV) of ecosystems is not fully understood, because a direct relation with environmental drivers cannot always be found. Many dynamic vegetation models allocate NPP to leaf, stem, and root compartments on an annual basis, and thus do not account for seasonal changes in productivity in response to changes in environmental stressors. We introduce this vegetation seasonality into the dynamic vegetation model LPJ-GUESS by implementing a new carbon allocation scheme that operates on a daily basis. We focus in particular on modelling the observed flux seasonality of the Amazon basin, and validate our new model against flux data and MODIS GPP products. We expect that introducing seasonal variability into the model improves estimates of annual productivity and IAV, and therefore the model's representation of ecosystem carbon budgets as a whole.
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposure based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating that the final model is robust given the available data.
Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases confidence that estimates derived from this model can be used for estimating welder exposures in the absence of individual measurement data. PMID:20870928
The role of internal climate variability for interpreting climate change scenarios
NASA Astrophysics Data System (ADS)
Maraun, Douglas
2013-04-01
When communicating information on climate change, the use of multi-model ensembles has been advocated to sample uncertainties over as wide a range as possible. To meet the demand for easily accessible results, the ensemble is often summarised by its multi-model mean signal. In rare cases, additional uncertainty measures are given to avoid losing all information on the ensemble spread, e.g., the highest and lowest projected values. Such approaches, however, disregard the fundamentally different nature of the different types of uncertainties and might cause wrong interpretations and subsequently wrong decisions for adaptation. Whereas scenario and climate model uncertainties are of epistemic nature, i.e., caused by an in-principle reducible lack of knowledge, uncertainties due to internal climate variability are aleatory, i.e., inherently stochastic and irreducible. As wisely stated in the proverb "climate is what you expect, weather is what you get", a specific region will experience one stochastic realisation of the climate system, but never exactly the expected climate change signal as given by a multi-model mean. Depending on the meteorological variable, region, and lead time, the signal might be strong or weak compared to the stochastic component. In cases of a low signal-to-noise ratio, even if the climate change signal is a well-defined trend, no trends or even opposite trends might be experienced. Here I propose to use the time of emergence (TOE) to quantify and communicate when climate change trends will exceed the internal variability. The TOE provides a useful measure for end users to assess the time horizon for implementing adaptation measures. Furthermore, internal variability is scale dependent: the more local the scale, the stronger the influence of internal climate variability. Thus investigating the TOE as a function of spatial scale could help to assess the required spatial scale for implementing adaptation measures.
I exemplify this proposal with a recently published study on the TOE for mean and heavy precipitation trends in Europe. In some regions trends emerge only late in the 21st century or even later, suggesting that in these regions adaptation to internal variability rather than to climate change is required. Yet in other regions the climate change signal is strong, urging for timely adaptation. Douglas Maraun, When at what scale will trends in European mean and heavy precipitation emerge? Env. Res. Lett., in press, 2013.
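The TOE concept above can be sketched in its simplest form: the first time at which a linear forced trend exceeds a fixed multiple of the internal-variability noise. The trend and noise values below are hypothetical illustrations, and real TOE estimates use ensemble spreads rather than a single sigma.

```python
# Toy time-of-emergence (TOE) calculation: the first year in which a linear
# forced trend exceeds n_sigma times the internal-variability noise.
# Trend and sigma values are hypothetical illustrations.

def time_of_emergence(trend_per_year, sigma_internal, n_sigma=2.0):
    """Years from start until trend * t first exceeds n_sigma * sigma."""
    if trend_per_year <= 0:
        return None  # no emergence for a flat or negative signal
    return n_sigma * sigma_internal / trend_per_year

# Strong local signal vs. weak signal in noisy precipitation (made-up units)
toe_strong = time_of_emergence(0.05, 0.5)
toe_weak = time_of_emergence(0.01, 0.5)
print(toe_strong, toe_weak)
```

The scale dependence discussed above enters through sigma_internal: local aggregates are noisier than continental means, so the same trend emerges later at finer spatial scales.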
Baldwin, Austin K.; Graczyk, David J.; Robertson, Dale M.; Saad, David A.; Magruder, Christopher
2012-01-01
The models to estimate chloride concentrations all used specific conductance as the explanatory variable, except for the model for the Little Menomonee River near Freistadt, which used both specific conductance and turbidity as explanatory variables. Adjusted R2 values for the chloride models ranged from 0.74 to 0.97. Models to estimate total suspended solids and total phosphorus used turbidity as the only explanatory variable. Adjusted R2 values ranged from 0.77 to 0.94 for the total suspended solids models and from 0.55 to 0.75 for the total phosphorus models. Models to estimate indicator bacteria used water temperature and turbidity as the explanatory variables, with adjusted R2 values from 0.54 to 0.69 for Escherichia coli bacteria models and from 0.54 to 0.74 for fecal coliform bacteria models. Dissolved oxygen was not used in any of the final models. These models may help managers measure the effects of land-use changes and improvement projects, establish total maximum daily loads, estimate important water-quality indicators such as bacteria concentrations, and enable informed decision making in the future.
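A minimal sketch of the kind of surrogate regression described above is a least-squares fit of chloride concentration on specific conductance with an adjusted R². The paired measurements below are hypothetical; the published models were site-specific and some used additional explanatory variables.

```python
# Minimal sketch of a surrogate-regression model like those above:
# chloride concentration regressed on specific conductance (hypothetical
# paired measurements; the published models were site-specific).

def ols_with_adj_r2(x, y):
    """Simple OLS y = a + b*x, returning (a, b, adjusted R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)  # one explanatory variable
    return a, b, adj_r2

# Hypothetical data: specific conductance (uS/cm) vs chloride (mg/L)
sc = [300.0, 500.0, 800.0, 1200.0, 1500.0, 2000.0]
cl = [20.0, 45.0, 90.0, 150.0, 190.0, 260.0]

a, b, adj_r2 = ols_with_adj_r2(sc, cl)
print(round(b, 3), round(adj_r2, 2))
```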
Modeling of cytometry data in logarithmic space: When is a bimodal distribution not bimodal?
Erez, Amir; Vogel, Robert; Mugler, Andrew; Belmonte, Andrew; Altan-Bonnet, Grégoire
2018-02-16
Recent efforts in systems immunology have led researchers to build quantitative models of cell activation and differentiation. One goal is to account for the distributions of proteins from single-cell measurements by flow cytometry or mass cytometry as readouts of biological regulation. In that context, large cell-to-cell variability is often observed in biological quantities. We show here that these readouts, viewed on a logarithmic scale, may exhibit two easily distinguishable modes while the underlying distribution (on a linear scale) is unimodal. We introduce a simple mathematical test to highlight this mismatch. We then dissect the flow of influence of cell-to-cell variability, proposing a graphical model that motivates higher-dimensional analysis of the data. Finally, we show how acquiring additional biological information can be used to reduce the uncertainty introduced by cell-to-cell variability, helping to clarify whether the data are uni- or bimodal. This communication has cautionary implications for manual and automatic gating strategies, as well as clustering and modeling of single-cell measurements. © 2018 International Society for Advancement of Cytometry.
NASA Astrophysics Data System (ADS)
Lakshmi, K.; Rama Mohan Rao, A.
2014-10-01
In this paper, a novel output-only damage-detection technique based on time-series models for structural health monitoring in the presence of environmental variability and measurement noise is presented. The large amount of data obtained in the form of time-history responses is transformed using principal component analysis in order to reduce the data size and thereby improve the computational efficiency of the proposed algorithm. The time instant of damage is obtained by fitting the acceleration time-history data from the structure using autoregressive (AR) and AR-with-exogenous-inputs (ARX) time-series prediction models. The probability density functions (PDFs) of the damage features, obtained from the variances of the prediction errors for the reference and current healthy data, are found to shift away from each other in the presence of various uncertainties such as environmental variability and measurement noise. Control limits based on a novelty index are obtained using the distances between the peaks of the PDF curves in the healthy condition and are used later for determining the current condition of the structure. Numerical simulation studies have been carried out using a simply supported beam and validated using experimental benchmark data for a three-storey framed bookshelf structure proposed by Los Alamos National Laboratory. The studies carried out in this paper clearly indicate the efficiency of the proposed algorithm for damage detection in the presence of measurement noise and environmental variability.
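The core of the AR-based damage feature above can be sketched on synthetic data: fit an AR model to a healthy-state response, then track the variance of its one-step prediction errors on new data, which shifts when the dynamics change. The paper uses higher-order AR/ARX models after PCA with novelty-index control limits; this toy keeps only an AR(1) model and seeded synthetic signals.

```python
import random

# Simplified sketch of the AR-based damage feature described above: fit an
# AR(1) model to a healthy-state response, then use the variance of its
# one-step prediction errors on new data as the damage feature. (The paper
# uses higher-order AR/ARX models after PCA; this toy keeps the core idea.)

def make_ar1(phi, n, seed):
    """Synthetic AR(1) response: x_t = phi * x_{t-1} + w_t."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + rng.gauss(0.0, 1.0))
    return x

def fit_ar1(x):
    """Least-squares estimate of the AR(1) coefficient."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def prediction_error_variance(x, phi):
    """Variance of one-step prediction errors under the reference model."""
    errors = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    mean = sum(errors) / len(errors)
    return sum((e - mean) ** 2 for e in errors) / len(errors)

healthy = make_ar1(0.8, 3000, seed=1)   # reference (healthy) response
current = make_ar1(0.2, 3000, seed=2)   # "damaged": changed dynamics

phi_ref = fit_ar1(healthy)
feature_healthy = prediction_error_variance(make_ar1(0.8, 3000, seed=3), phi_ref)
feature_damaged = prediction_error_variance(current, phi_ref)
print(feature_damaged > feature_healthy)  # error variance shifts with damage
```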
Lim, Won Hee; Park, Eun Woo; Chae, Hwa Sung; Kwon, Soon Man; Jung, Hoi-In; Baek, Seung-Hak
2017-06-01
The purpose of this study was to compare the results of two-dimensional (2D) and three-dimensional (3D) measurements of the alveolar molding effect in patients with unilateral cleft lip and palate. The sample consisted of 23 infants with unilateral cleft lip and palate treated with a nasoalveolar molding (NAM) appliance. Dental models were fabricated at the initial visit (T0; mean age, 23.5 days after birth) and after alveolar molding therapy (T1; mean duration, 83 days). For 3D measurement, virtual models were constructed using a laser scanner and 3D software. For 2D measurement, 1:1 ratio photographic images of the dental models were scanned. After setting common reference points and lines for the 2D and 3D measurements, 7 linear and 5 angular variables were measured at the T0 and T1 stages. The Wilcoxon signed rank test and Bland-Altman analysis were performed for statistical analysis. The alveolar molding effects of NAM treatment on the maxilla were inward bending of the anterior part of the greater segment, forward growth of the lesser segment, and a decrease in the cleft gap in the greater and lesser segments. Two angular variables (ΔACG-BG-PG and ΔACL-BL-PL) showed a difference in the statistical interpretation of the change by NAM treatment between the 2D and 3D measurements. However, Bland-Altman analysis did not exhibit a significant difference in the amounts of change in these variables between the two measurement methods. These results suggest that data from 2D measurement could be reliably used in conjunction with data from 3D measurement.
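The Bland-Altman agreement computation used above to compare the 2D and 3D methods is a short calculation: the bias (mean paired difference) and its 95% limits of agreement. The paired angular changes below are hypothetical example values, not the study's data.

```python
import math

# Minimal Bland-Altman agreement computation of the kind used above to
# compare 2D and 3D measurements (the paired values are hypothetical).

def bland_altman(m1, m2):
    """Return (bias, lower limit, upper limit) as bias +/- 1.96 * SD."""
    diffs = [a - b for a, b in zip(m1, m2)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired angular changes (degrees), 2D vs 3D measurement
two_d = [4.1, 3.6, 5.0, 2.8, 4.4, 3.9]
three_d = [4.0, 3.9, 4.7, 3.0, 4.5, 3.7]

bias, lower, upper = bland_altman(two_d, three_d)
print(round(bias, 2))
```

If nearly all paired differences fall inside the limits and the bias is near zero, the two methods agree, which is the sense in which the abstract reports no significant 2D/3D difference.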
Innovation Motivation and Artistic Creativity
ERIC Educational Resources Information Center
Joy, Stephen P.
2005-01-01
Innovation motivation is a social learning model of originality comprising two variables: the need to be different and innovation expectancy. This study examined their contribution to artistic creativity in a sample of undergraduates. Participants completed measures of both innovation motivation variables as well as intelligence, adjustment, and…
Comment on 'Amplification of endpoint structure for new particle mass measurement at the LHC'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barr, A. J.; Gwenlan, C.; Young, C. J. S.
2011-06-01
We present a comment on the kinematic variable m_CT2 recently proposed in Won Sang Cho, Jihn E. Kim, and Ji-Hun Kim, Phys. Rev. D 81, 095010 (2010). The variable is designed to be applied to models such as R-parity conserving supersymmetry (SUSY) in which there is pair production of new heavy particles, each of which decays to a single massless visible component and a massive invisible component. Cho, Kim, and Kim proposed that a measurement of the peak of the m_CT2 distribution could be used to precisely constrain the masses of the SUSY particles. We show that, for an example characterized by direct squark decays, when standard model backgrounds are included in simulations, the sensitivity of m_CT2 to the SUSY particle masses is more seriously degraded than that of other previously proposed variables.
Vehicle-specific emissions modeling based upon on-road measurements.
Frey, H Christopher; Zhang, Kaishan; Rouphail, Nagui M
2010-05-01
Vehicle-specific microscale fuel use and emissions rate models are developed based upon real-world hot-stabilized tailpipe measurements made using a portable emissions measurement system. Consecutive averaging periods of one to three multiples of the response time are used to compare two semiempirical physically based modeling schemes. One scheme is based on internally observable variables (IOVs), such as engine speed and manifold absolute pressure, while the other is based on externally observable variables (EOVs), such as speed, acceleration, and road grade. For NO, HC, and CO emission rates, the average R2 ranged from 0.41 to 0.66 for the former and from 0.17 to 0.30 for the latter. The EOV models have R2 for CO2 of 0.43 to 0.79, versus 0.99 for the IOV models. The models are sensitive to episodic events in driving cycles such as high acceleration. Intervehicle and fleet average modeling approaches are compared; the former account for microscale variations that might be useful for some types of assessments. EOV-based models have practical value for traffic management or simulation applications since IOVs usually are not available or not used for emission estimation.
A random walk model for evaluating clinical trials involving serial observations.
Hopper, J L; Young, G P
1988-05-01
For clinical trials where the variable of interest is ordered and categorical (for example, disease severity or a symptom scale), and where measurements are taken at intervals, it might be possible to achieve greater discrimination between the efficacy of treatments by modelling each patient's progress as a stochastic process. The random walk is a simple, easily interpreted model that can be fitted by maximum likelihood using a maximization routine, with inference based on standard likelihood theory. In general the model can allow for randomly censored data, incorporates measured prognostic factors, and inference is conditional on the (possibly non-random) allocation of patients. Tests of fit and of model assumptions are proposed, and applications to two therapeutic trials of gastroenterological disorders are presented. The model gave measures of the rate of, and variability in, improvement for patients under different treatments. A small simulation study suggested that the model is more powerful than considering the difference between initial and final scores, even when applied to data generated by a mechanism other than the random walk model assumed in the analysis. It thus provides a useful additional statistical method for evaluating clinical trials.
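For an ordered categorical outcome observed at intervals, the simplest random-walk fit reduces to maximum-likelihood estimation of the step probabilities: for a stationary ±1 walk with multinomial steps, the MLE is just the observed transition proportions. The sketch below uses that simplification (no censoring or prognostic factors, unlike the full model above) and hypothetical severity sequences:

```python
from collections import Counter

def fit_random_walk(paths):
    """ML step probabilities (down/same/up) for a stationary +/-1 random walk:
    with multinomial steps, the MLE is the observed proportion of each step type."""
    steps = Counter()
    for path in paths:
        for a, b in zip(path, path[1:]):
            steps[(b > a) - (b < a)] += 1   # -1, 0, or +1
    n = sum(steps.values())
    return {k: steps[k] / n for k in (-1, 0, 1)}

# Hypothetical weekly symptom-severity scores for two patients.
paths = [[3, 2, 2, 1, 1, 0], [4, 3, 3, 2, 2, 2]]
probs = fit_random_walk(paths)   # probs[-1] is the per-interval improvement rate
```

Comparing the fitted step probabilities between treatment arms gives a rate-of-improvement measure of the kind the abstract describes.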
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Eickmeier, Justin
Acoustical oceanography is one way to study the ocean, its internal layers, its boundaries, and the processes occurring within it, using underwater acoustics. Acoustical sensing techniques allow for the measurement of ocean processes in settings where logistics or cost preclude traditional in-situ measurements. Acoustic signals propagate as pressure wavefronts from a source to a receiver through an ocean medium with variable physical parameters. The water column physical parameters that change acoustic wave propagation in the ocean include temperature, salinity, current, surface roughness, seafloor bathymetry, and vertical stratification over variable time scales. The impacts of short-time-scale water column variability on acoustic wave propagation include coherent and incoherent surface reflections, wavefront arrival time delay, focusing or defocusing of the intensity of acoustic beams, and refraction of acoustic rays. This study focuses on high-frequency broadband acoustic waves and examines the influence of short-time-scale water column variability on broadband high-frequency wavefronts, from 7 to 28 kHz, in shallow water. Short-time-scale variability is on the order of seconds to hours, and short-spatial-scale variability is on the order of a few centimeters. Experimental data were collected during an acoustic experiment along the 100 m isobath, and data analysis was conducted using available acoustic wave propagation models. Three main topics are studied to show that acoustic waves are viable as a remote sensing tool for measuring oceanographic parameters in shallow water. First, coherent surface reflections forming striation patterns, from multipath receptions, through rough surface interaction of broadband acoustic signals with the dynamic sea surface are analyzed.
Matched filtered results of received acoustic waves are compared with a ray tracing numerical model using a sea surface boundary generated from measured water wave spectra at the time of signal propagation. It is determined that on a time scale of seconds, corresponding to typical periods of surface water waves, the arrival time of reflected acoustic signals from surface waves appear as striation patterns in measured data and can be accurately modelled by ray tracing. Second, changes in acoustic beam arrival angle and acoustic ray path influenced by isotherm depth oscillations are analyzed using an 8-element delay-sum beamformer. The results are compared with outputs from a two-dimensional (2-D) parabolic equation (PE) model using measured sound speed profiles (SSPs) in the water column. Using the method of beamforming on the received signal, the arrival time and angle of an acoustic beam was obtained for measured acoustic signals. It is determined that the acoustic ray path, acoustic beam intensity and angular spread are a function of vertical isotherm oscillations on a time scale of minutes and can be modeled accurately by a 2-D PE model. Third, a forward problem is introduced which uses acoustic wavefronts received on a vertical line array, 1.48 km from the source, in the lower part of the water column to infer range dependence or independence in the SSP. The matched filtering results of received acoustic wavefronts at all hydrophone depths are compared with a ray tracing routine augmented to calculate only direct path and bottom reflected signals. It is determined that the SSP range dependence can be inferred on a time scale of hours using an array of hydrophones spanning the water column. Sound speed profiles in the acoustic field were found to be range independent for 11 of the 23 hours in the measurements. 
An SSP cumulative reconstruction process, conducted from the seafloor to the sea surface, layer by layer, identifies critical segments in the SSP that define the ray path, arrival time, and boundary interactions. Data-model comparison between matched filtered arrival time spread and arrival time output from the ray tracing was robust when the SSP measured at the receiver was input to the model. When the SSP measured nearest the source (at the same instant in time) was input to the ray tracing model, the data-model comparison was poor. It was determined that the cumulative sound speed change in the SSP near the source was 1.041 m/s greater than that of the SSP at the receiver, which explains the poor data-model comparison. In this study, the influences of spatial and temporal changes in the oceanography of shallow water regions on broadband acoustic wave propagation in the frequency range of 7 to 28 kHz are addressed. Acoustic waves can be used as remote sensing tools to measure oceanographic parameters in shallow water, and data-model comparison results show a direct relationship between oceanographic variations and acoustic wave propagation.
Kinetics of phase transformation in glass forming systems
NASA Technical Reports Server (NTRS)
Ray, Chandra S.
1994-01-01
The objectives of this research were to (1) develop computer models for realistic simulations of nucleation and crystal growth in glasses, which would also have the flexibility to accommodate the different variables related to sample characteristics and experimental conditions, and (2) design and perform nucleation and crystallization experiments using calorimetric measurements, such as differential scanning calorimetry (DSC) and differential thermal analysis (DTA), to verify these models. The variables related to sample characteristics mentioned in (1) above include the size of the glass particles, nucleating agents, and the relative concentration of surface and internal nuclei. A change in any of these variables changes the mode of the transformation (crystallization) kinetics. Variations in experimental conditions include isothermal and nonisothermal DSC/DTA measurements. This research would lead to the development of improved, more realistic methods for analyzing DSC/DTA peak profiles to determine the kinetic parameters of nucleation and crystal growth, as well as to an assessment of the relative merits and demerits of the thermoanalytical models presently used to study phase transformation in glasses.
NASA Astrophysics Data System (ADS)
Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi
2015-04-01
The objective of project North State, funded by Framework Programme 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated from remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the ultimate objective of completely automatic estimation. North State has two sites for intensive studies, in southern and northern Finland respectively, one in Iceland, and one in the Komi Republic of Russia. Additionally, the model input variables will be estimated, and the models applied, over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to earth observation variables, and flux tower measurements are also applied in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, along with an example of a completely automatic land cover class prediction.
Investigating the Complexity of NGC 2992 with HETG
NASA Astrophysics Data System (ADS)
Canizares, Claude
2009-09-01
NGC 2992 is a nearby (z = 0.00771) Seyfert galaxy with a variable 1.5-2 classification. Over the past 30 years, the 2-10 keV continuum flux has varied by a factor of ~20. This was accompanied by complex variability in the multi-component Fe K line emission, which may indicate violent flaring activity in the innermost regions of the accretion disk. By observing NGC 2992 with the HETG, we will obtain the best constraint to date on the FWHM of the narrow, distant-matter Fe K line emission, along with precision measurement of its centroid energy, thereby enabling more accurate modeling of the variable broad component. We will also test models of the soft excess through measurement of narrow absorption lines attributable to a warm absorber and narrow emission lines arising from photoexcitation.
Prevalence Odds Ratio versus Prevalence Ratio: Choice Comes with Consequences
Tamhane, Ashutosh R; Westfall, Andrew O; Burkholder, Greer A; Cutter, Gary R
2016-01-01
Odds ratio (OR), risk ratio (RR), and prevalence ratio (PR) are some of the measures of association often reported in research studies quantifying the relationship between an independent variable and the outcome of interest. There has been much debate on which measure is appropriate to report depending on the study design. However, the effects on statistical significance of selecting a particular category of the outcome to be modeled and/or of changing the reference group for categorical independent variables, although known, are scantly discussed in the literature and rarely illustrated with examples. In this article, we provide an example of a cross-sectional study wherein PR was chosen over the (prevalence) OR, and demonstrate the analytic implications of the choice of category to be modeled and the choice of reference level for independent variables. PMID:27460748
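The practical difference between the (prevalence) odds ratio and the prevalence ratio is easy to see on a 2×2 table: for a common outcome, the OR sits noticeably further from 1 than the PR. The counts below are illustrative, not from the study:

```python
def odds_ratio(a, b, c, d):
    """2x2 table: a,b = cases/non-cases in group 1; c,d = cases/non-cases in group 2."""
    return (a / b) / (c / d)

def prevalence_ratio(a, b, c, d):
    """Ratio of outcome prevalences between the two groups."""
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts for a common outcome (prevalences 0.6 vs 0.4).
a, b, c, d = 60, 40, 40, 60
po_ratio = odds_ratio(a, b, c, d)        # 2.25
p_ratio = prevalence_ratio(a, b, c, d)   # 1.50
```

With prevalences of 60% versus 40%, the OR (2.25) overstates the PR (1.50) by half — the divergence that motivates choosing PR for common outcomes in cross-sectional designs.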
Jones, C Jessie; Rutledge, Dana N; Aquino, Jordan
2010-07-01
The purposes of this study were to determine whether people with and without fibromyalgia (FM) age 50 yr and above showed differences in physical performance and perceived functional ability, and to determine whether age, gender, depression, and physical activity level altered the impact of FM status on these factors. Dependent variables included perceived function and 6 performance measures (multidimensional balance, aerobic endurance, overall functional mobility, lower body strength, and gait velocity at normal and fast paces). Independent (predictor) variables were FM status, age, gender, depression, and physical activity level. Results indicated significant differences between adults with and without FM on all physical-performance measures and perceived function. Linear-regression models showed that the contribution of significant predictors was in expected directions. All regression models were significant, accounting for 16-65% of variance in the dependent variables.
NASA Technical Reports Server (NTRS)
Musick, H. Brad
1993-01-01
The objectives of this research are: to develop and test predictive relations for the quantitative influence of vegetation canopy structure on wind erosion of semiarid rangeland soils, and to develop remote sensing methods for measuring the canopy structural parameters that determine sheltering against wind erosion. The influence of canopy structure on wind erosion will be investigated by means of wind-tunnel and field experiments using model roughness elements to simulate plant canopies. The canopy structural variables identified by these experiments as important in determining vegetative sheltering against wind erosion will then be measured at a number of naturally vegetated field sites and compared with estimates of these variables derived from analysis of remotely sensed data.
[Measurement of Water COD Based on UV-Vis Spectroscopy Technology].
Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei
2016-01-01
Ultraviolet/visible (UV/Vis) spectroscopy was used to measure water COD. A total of 135 water samples were collected from Zhejiang province. Raw spectra and spectra preprocessed with 3 different methods (Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and 1st derivatives) were compared to determine the optimal pretreatment method for analysis. Spectral variable selection is an important strategy in spectrum modeling, because it tends toward parsimonious data representation and can lead to multivariate models with better performance. In order to simplify the calibration models, the preprocessed spectra were then used to select sensitive wavelengths by competitive adaptive reweighted sampling (CARS), Random Frog, and Successive Genetic Algorithm (GA) methods. Different numbers of sensitive wavelengths were selected by the different variable selection methods under SNV preprocessing. Partial least squares (PLS) was used to build models with the full spectra, and an Extreme Learning Machine (ELM) was applied to build models with the selected wavelength variables. The overall results showed that the ELM model performed better than the PLS model, and the ELM model with the wavelengths selected by CARS obtained the best results, with a determination coefficient (R2), RMSEP, and RPD of 0.82, 14.48, and 2.34 for the prediction set. The results indicate that UV/Vis spectroscopy with characteristic wavelengths obtained by the CARS variable selection method, combined with ELM calibration, is feasible for the rapid and accurate determination of COD in aquaculture water. Moreover, this study laid the foundation for further implementation of online analysis of aquaculture water and rapid determination of other water quality parameters.
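The ELM calibration used above is straightforward to sketch: a fixed random hidden layer followed by a ridge-regularized least-squares readout. The synthetic "spectra", layer size, and weight scaling below are assumptions for illustration, not the paper's 135-sample COD data or its CARS-selected wavelengths:

```python
import numpy as np

rng = np.random.default_rng(3)

def elm_train(X, y, hidden=200, ridge=1e-4):
    """Extreme Learning Machine: random fixed hidden layer, least-squares readout."""
    W = 0.1 * rng.standard_normal((X.shape[1], hidden))   # input weights, never trained
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic "spectra": 20 wavelength variables; the target depends on only a few,
# mimicking the effect of sensitive-wavelength selection.
X = rng.standard_normal((400, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.standard_normal(400)
Xtr, ytr, Xte, yte = X[:300], y[:300], X[300:], y[300:]
W, b, beta = elm_train(Xtr, ytr)
pred = elm_predict(Xte, W, b, beta)
r2 = 1.0 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)
```

Because only the linear readout is solved for, training is a single linear system — the speed advantage that makes ELM attractive for online water-quality analysis.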
NASA Astrophysics Data System (ADS)
Bonura, A.; Capizzo, M. C.; Fazio, C.; Guastella, I.
2008-05-01
In this paper we present a pedagogic approach aimed at modeling electric conduction in semiconductors, built by using NetLogo, a programmable modeling environment for building and exploring multi-agent systems. 'Virtual experiments' are implemented to confront predictions of different microscopic models with real measurements of electric properties of matter, such as resistivity. The relations between these electric properties and other physical variables, like temperature, are then analyzed.
High frequency sonar variability in littoral environments: Irregular particles and bubbles
NASA Astrophysics Data System (ADS)
Richards, Simon D.; Leighton, Timothy G.; White, Paul R.
2002-11-01
Littoral environments may be characterized by high concentrations of suspended particles. Such suspensions contribute to attenuation through visco-inertial absorption and scattering and may therefore be partially responsible for the observed variability in high frequency sonar performance in littoral environments. Microbubbles which are prevalent in littoral waters also contribute to volume attenuation through radiation, viscous and thermal damping and cause dispersion. The attenuation due to a polydisperse suspension of particles with depth-dependent concentration has been included in a sonar model. The effects of a depth-dependent, polydisperse population of microbubbles on attenuation, sound speed and volume reverberation are also included. Marine suspensions are characterized by nonspherical particles, often plate-like clay particles. Measurements of absorption in dilute suspensions of nonspherical particles have shown disagreement with predictions of spherical particle models. These measurements have been reanalyzed using three techniques for particle sizing: laser diffraction, gravitational sedimentation, and centrifugal sedimentation, highlighting the difficulty of characterizing polydisperse suspensions of irregular particles. The measurements have been compared with predictions of a model for suspensions of oblate spheroids. Excellent agreement is obtained between this model and the measurements for kaolin particles, without requiring any a priori knowledge of the measurements.
Unidimensional factor models imply weaker partial correlations than zero-order correlations.
van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J
2018-06-01
In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
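The paper's implication can be checked numerically: for data generated by a one-factor model, the partial correlation between two indicators given a third should lie between zero and their zero-order correlation. A simulation sketch with assumed loadings (0.8, 0.7, 0.6 are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
eta = rng.standard_normal(n)                    # single latent factor
loadings = [0.8, 0.7, 0.6]                      # assumed standardized loadings
X = np.array([lam * eta + np.sqrt(1 - lam**2) * rng.standard_normal(n)
              for lam in loadings])             # three unit-variance indicators

def partial_corr(x, y, z):
    """Correlation between x and y after linearly regressing out z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

r12 = np.corrcoef(X[0], X[1])[0, 1]             # zero-order correlation (~0.56)
r12_3 = partial_corr(X[0], X[1], X[2])          # partial correlation (~0.45)
```

A partial correlation outside the interval (0, r12) — stronger than the zero-order correlation, or of opposite sign — would be evidence against unidimensionality, which is exactly what the proposed bootstrap test looks for.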
NASA Astrophysics Data System (ADS)
Horvath, Sarah; Myers, Sam; Ahlers, Johnathon; Barnes, Jason W.
2017-10-01
Stellar seismic activity produces variations in brightness that introduce oscillations into transit light curves, which can create challenges for traditional fitting models. These oscillations disrupt baseline stellar flux values and potentially mask transits. We develop a model that removes these oscillations from transit light curves by minimizing the significance of each oscillation in frequency space. By removing stellar variability, we prepare each light curve for traditional fitting techniques. We apply our model to the δ Scuti star KOI-976 and demonstrate that our variability-subtraction routine successfully allows for measuring bulk system characteristics using traditional light-curve fitting. These results open a new window for characterizing bulk system parameters of planets orbiting seismically active stars.
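The core idea — suppressing a stellar oscillation by attacking it in frequency space — can be sketched with a simple FFT notch. The toy light curve, pulsation frequency, and box-shaped transit below are assumptions for illustration; this is a far cruder filter than the paper's significance-minimization routine:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 4096, endpoint=False)        # time in arbitrary units
transit = np.where((t > 4.5) & (t < 5.5), -0.01, 0.0)   # toy box transit, 1% deep
pulsation = 0.02 * np.sin(2 * np.pi * 3.0 * t)          # single delta-Scuti-like mode
flux = 1.0 + transit + pulsation + 1e-4 * rng.standard_normal(t.size)

# Locate the strongest oscillation in the spectrum and zero it out.
F = np.fft.rfft(flux - flux.mean())
peak = np.argmax(np.abs(F[1:])) + 1          # skip the DC bin
F[peak - 1 : peak + 2] = 0.0                 # notch the peak and its neighbors
cleaned = np.fft.irfft(F, n=t.size) + flux.mean()
```

Because the box transit's power is spread over low frequencies while the pulsation is concentrated in one narrow peak, the notch removes the oscillation while leaving the transit shape largely intact, recovering a light curve suitable for standard fitting.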
Saraswat, Prabhav; MacWilliams, Bruce A; Davis, Roy B; D'Astous, Jacques L
2013-01-01
Several multisegment foot models have been proposed and some have been used to study foot pathologies. These models have been tested and validated on typically developed populations; however application of such models to feet with significant deformities presents an additional set of challenges. For the first time, in this study, a multisegment foot model is tested for repeatability in a population of children with symptomatic abnormal feet. The results from this population are compared to the same metrics collected from an age matched (8-14 years) typically developing population. The modified Shriners Hospitals for Children, Greenville (mSHCG) foot model was applied to ten typically developing children and eleven children with planovalgus feet by two clinicians. Five subjects in each group were retested by both clinicians after 4-6 weeks. Both intra-clinician and inter-clinician repeatability were evaluated using static and dynamic measures. A plaster mold method was used to quantify variability arising from marker placement error. Dynamic variability was measured by examining trial differences from the same subjects when multiple clinicians carried out the data collection multiple times. For hindfoot and forefoot angles, static and dynamic variability in both groups was found to be less than 4° and 6° respectively. The mSHCG model strategy of minimal reliance on anatomical markers for dynamic measures and inherent flexibility enabled by separate anatomical and technical coordinate systems resulted in a model equally repeatable in typically developing and planovalgus populations. Copyright © 2012 Elsevier B.V. All rights reserved.
Variable-intercept panel model for deformation zoning of a super-high arch dam.
Shi, Zhongwen; Gu, Chongshi; Qin, Dong
2016-01-01
This study determines dam deformation similarity indexes based on an analysis of deformation zoning features and panel data clustering theory, with comprehensive consideration of the actual deformation law of super-high arch dams and the spatial-temporal features of dam deformation. Measurement methods for these indexes are studied. Based on the established deformation similarity criteria, the principle used to determine the number of dam deformation zones is constructed through the entropy weight method. This study proposes a deformation zoning method for super-high arch dams and its implementation steps, analyzes the effect of special influencing factors of different dam zones on the deformation, introduces dummy variables that represent the special effect on dam deformation, and establishes a variable-intercept panel model for deformation zoning of super-high arch dams. Based on different patterns of the special effect in the variable-intercept panel model, two panel analysis models were established to capture fixed and random effects of dam deformation. The Hausman test method for model selection and a model effectiveness assessment method are discussed. Finally, the effectiveness of the established models is verified through a case study.
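A variable-intercept (fixed-effects) panel model of the kind described above can be estimated by ordinary least squares once one dummy variable per zone is added to the design matrix. A minimal sketch on synthetic data, with assumed zone intercepts and a single shared slope standing in for the common deformation covariates:

```python
import numpy as np

rng = np.random.default_rng(5)
zones, n_per = 3, 50
zone_ids = np.repeat(np.arange(zones), n_per)
alpha_true = np.array([1.0, 2.5, -0.5])        # assumed zone-specific intercepts
x = rng.standard_normal(zones * n_per)         # shared covariate (e.g., water level)
y = alpha_true[zone_ids] + 0.8 * x + 0.05 * rng.standard_normal(zones * n_per)

# Variable-intercept model: one dummy column per zone plus the shared slope.
design = np.column_stack([np.eye(zones)[zone_ids], x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
alpha_hat, slope_hat = coef[:zones], coef[zones]
```

The zone dummies absorb each zone's "special effect" while the slope is pooled across zones; choosing between this fixed-effects form and a random-effects form is where the Hausman test mentioned above comes in.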
A Protective Factors Model for Alcohol Abuse and Suicide Prevention among Alaska Native Youth
Allen, James; Mohatt, Gerald V.; Fok, Carlotta Ching Ting; Henry, David; Burkett, Rebekah
2014-01-01
This study provides an empirical test of a culturally grounded theoretical model for prevention of alcohol abuse and suicide risk with Alaska Native youth, using a promising set of culturally appropriate measures for the study of the process of change and outcome. This model is derived from qualitative work that generated a heuristic model of protective factors from alcohol (Allen et al., 2006; Mohatt, Hazel et al., 2004; Mohatt, Rasmus et al., 2004). Participants included 413 rural Alaska Native youth ages 12-18 who assisted in testing a predictive model of Reasons for Life and Reflective Processes about alcohol abuse consequences as co-occurring outcomes. Specific individual, family, peer, and community level protective factor variables predicted these outcomes. Results suggest prominent roles for these predictor variables as intermediate prevention strategy target variables in a theoretical model for a multilevel intervention. The model guides understanding of underlying change processes in an intervention to increase the ultimate outcome variables of Reasons for Life and Reflective Processes regarding the consequences of alcohol abuse. PMID:24952249
Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen
2014-01-01
It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916
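Quantile regression, as used in the abstract above, minimizes the pinball (check) loss rather than squared error. A minimal sketch of that idea: for an intercept-only model, the minimizer of the empirical pinball loss at level tau is the tau-quantile, verified here on synthetic skewed data (the exponential outcome is an illustrative assumption, not the survey data):

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Check loss for quantile tau: under-predictions weighted tau, over-predictions 1-tau."""
    e = y - pred
    return np.mean(np.where(e >= 0, tau * e, (tau - 1) * e))

rng = np.random.default_rng(1)
y = rng.exponential(scale=2.0, size=10_000)    # skewed outcome for illustration

# Intercept-only quantile regression: minimize the empirical pinball loss over a grid.
tau = 0.25                                     # cf. the Q0.25 model in the abstract
grid = np.linspace(0.0, 10.0, 2001)
best = grid[int(np.argmin([pinball_loss(y, c, tau) for c in grid]))]
q25 = np.quantile(y, tau)                      # the minimizer is the tau-quantile
```

Adding covariates simply replaces the constant `pred` with a linear predictor; fitting separate models at several tau values yields the per-quantile coefficient estimates the study compares against least-squares regression.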
NASA Astrophysics Data System (ADS)
Shirley, S.; Watts, J. D.; Kimball, J. S.; Zhang, Z.; Poulter, B.; Klene, A. E.; Jones, L. A.; Kim, Y.; Oechel, W. C.; Zona, D.; Euskirchen, E. S.
2017-12-01
A warming Arctic climate is contributing to shifts in landscape moisture and temperature regimes, a shortening of the non-frozen season, and increases in the depth of the annual active layer. These changing environmental conditions make it difficult to determine whether tundra ecosystems are a carbon sink or source. At present, eddy covariance flux towers and biophysical measurements within the tower footprint provide the most direct assessment of change to the tundra carbon balance. However, these measurements have a limited spatial footprint and exist over relatively short timescales. Thus, terrestrial ecosystem models are needed to provide an improved understanding of how changes in landscape environmental conditions impact regional carbon fluxes. This study examines the primary drivers thought to affect the magnitude and variability of tundra-atmosphere CO2 and CH4 fluxes over the Alaska North Slope. Also investigated is the ability of biophysical models to capture seasonal flux characteristics over the 9 tundra tower sites examined. First, we apply a regression tree approach to ascertain which remotely sensed environmental variables best explain observed variability in the tower fluxes. Next, we compare flux estimates obtained from multiple process models, including Terrestrial Carbon Flux (TCF), the Lund-Potsdam-Jena Wald Schnee und Landschaft (LPJ-wsl) model, and Soil Moisture Active Passive Level 4 Carbon (SMAP L4_C) products. Our results indicate that, of the 7 variables examined, vegetation greenness, temperature, and moisture are the most significant predictors of carbon flux magnitude over the tundra tower sites. This study found that satellite data-driven models, owing to the ability of remote sensing instruments to capture the physical principles and processes driving tundra carbon flux, are more effective at estimating the magnitude and spatiotemporal variability of CO2 and CH4 fluxes in northern high latitude ecosystems.
Evaluation of anti-migration properties of biliary covered self-expandable metal stents
Minaga, Kosuke; Kitano, Masayuki; Imai, Hajime; Harwani, Yogesh; Yamao, Kentaro; Kamata, Ken; Miyata, Takeshi; Omoto, Shunsuke; Kadosaka, Kumpei; Sakurai, Toshiharu; Nishida, Naoshi; Kudo, Masatoshi
2016-01-01
AIM: To assess the anti-migration potential of six biliary covered self-expandable metal stents (C-SEMSs) by using a newly designed phantom model. METHODS: In the phantom model, the stent was placed in differently sized holes in a silicone wall and retracted with a retraction robot. Resistance force to migration (RFM) was measured by a force gauge on the stent end. Radial force (RF) was measured with an RF measurement machine. The measured flare structure variables were the outer diameter, height, and taper angle of the flare (ODF, HF, and TAF, respectively). Correlations between RFM and RF or the flare variables were analyzed using a linear correlation model. RESULTS: Of the six stents, five were braided and one was laser-cut. The RF and RFM of each stent were expressed as the average of five replicate measurements. For all six stents, RFM and RF decreased as the hole diameter increased. For all six stents, RFM and RF correlated strongly when the stent had not fully expanded. This correlation was not observed for the five braided stents (i.e., with the laser-cut stent excluded). For all six stents, there was a strong correlation between RFM and TAF when the stent had fully expanded. For the five braided stents, RFM after full stent expansion correlated strongly with all three stent flare structure variables (ODF, HF, and TAF). The laser-cut C-SEMS had higher RFMs than the braided C-SEMSs regardless of expansion state. CONCLUSION: RF was an important anti-migration property when the C-SEMS had not fully expanded. Once fully expanded, the stent flare structure variables play an important role in anti-migration. PMID:27570427
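The correlation analysis described here is ordinary Pearson correlation between RFM and each flare variable. A minimal sketch with invented numbers (not the study's measurements):

```python
# Pearson's r between a flare variable (e.g., taper angle, TAF) and the
# resistance force to migration (RFM). Values below are hypothetical.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

taf_deg = [10, 15, 20, 25, 30]        # taper angle of the flare (hypothetical)
rfm_n   = [1.1, 1.6, 2.2, 2.6, 3.2]   # resistance force to migration (hypothetical)
r = pearson_r(taf_deg, rfm_n)
```

A strong correlation (r close to 1) is what the abstract reports between RFM and TAF for fully expanded stents.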
The topographic distribution of annual incoming solar radiation in the Rio Grande River basin
NASA Technical Reports Server (NTRS)
Dubayah, R.; Van Katwijk, V.
1992-01-01
We model the annual incoming solar radiation topoclimatology for the Rio Grande River basin in Colorado, U.S.A. Hourly pyranometer measurements are combined with satellite reflectance data and 30-m digital elevation models within a topographic solar radiation algorithm. Our results show that there is large spatial variability within the basin, even at an annual integration length, but the annual, basin-wide mean is close to that measured by the pyranometers. The variance within 16 sq km and 100 sq km regions is a linear function of the average slope in the region, suggesting a possible parameterization for sub-grid-cell variability.
Rain attenuation measurements: Variability and data quality assessment
NASA Technical Reports Server (NTRS)
Crane, Robert K.
1989-01-01
Year-to-year variations in the cumulative distributions of rain rate or rain attenuation are evident in any of the published measurements for a single propagation path that span a period of several years of observation. These variations must be described by models for the prediction of rain attenuation statistics. Now that a large measurement database has been assembled by the International Radio Consultative Committee, the information needed to assess variability is available. On the basis of 252 sample cumulative distribution functions for the occurrence of attenuation by rain, the expected year-to-year variation in attenuation at a fixed probability level in the 0.1 to 0.001 percent of a year range is estimated to be 27 percent. The expected deviation from an attenuation model prediction for a single year of observations is estimated to exceed 33 percent when any of the available global rain climate models are employed to estimate the rain rate statistics. The probability distribution for the variation in attenuation or rain rate at a fixed fraction of a year is lognormal. The lognormal behavior of the variate was used to compile the statistics for variability.
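The lognormal description can be sketched numerically: if the attenuation exceeded at a fixed fraction of a year varies lognormally across years, then ln(A) is normal and the year-to-year spread is summarized by the standard deviation on the log scale (equivalently, a multiplicative geometric spread). The yearly values below are made up for illustration only.

```python
# Summarizing year-to-year attenuation variability under a lognormal model.
import math
from statistics import pstdev

yearly_attenuation_db = [8.1, 10.2, 7.4, 11.5, 9.0, 12.3, 8.8]  # hypothetical
logs = [math.log(a) for a in yearly_attenuation_db]
sigma = pstdev(logs)                   # standard deviation of ln(A)
geo_sd = math.exp(sigma)               # one-sigma multiplicative spread
percent_variation = (geo_sd - 1) * 100  # rough percent expression of the spread
```

This is only a toy summary of the statistical model named in the abstract, not a reconstruction of the 252-distribution analysis.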
A novel model incorporating two variability sources for describing motor evoked potentials
Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.
2014-01-01
Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287
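The dual variability source idea can be sketched in simulation (this is a conceptual illustration under assumed parameter values, not the authors' model code): one additive noise term perturbs the effective stimulation strength before a sigmoidal recruitment curve, and a second multiplicative term acts after it.

```python
# Toy simulation of an MEP input-output curve with variability on both
# sides of a sigmoidal recruitment nonlinearity. All parameters hypothetical.
import math
import random

def mep_amplitude(stim, rng, sigma_before=0.05, sigma_after=0.2,
                  low=0.01, high=5.0, midpoint=0.6, slope=15.0):
    """Simulate one MEP amplitude (mV) at a given stimulation strength."""
    x = stim + rng.gauss(0.0, sigma_before)   # variability before the sigmoid
    recruited = low + (high - low) / (1.0 + math.exp(-slope * (x - midpoint)))
    return recruited * math.exp(rng.gauss(0.0, sigma_after))  # variability after

rng = random.Random(0)
io_curve = {s: [mep_amplitude(s, rng) for _ in range(200)]
            for s in (0.4, 0.6, 0.8)}   # stimulation strengths (fraction of max)
```

Because the pre-sigmoid noise passes through the steep nonlinearity, the simulated amplitude distributions change shape along the curve, which is the phenomenon the dual-source model is designed to capture.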
Wang, Ching-Yun; Song, Xiao
2016-11-01
Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Van Hertem, T; Maltz, E; Antler, A; Romanini, C E B; Viazzi, S; Bahr, C; Schlageter-Tello, A; Lokhorst, C; Berckmans, D; Halachmi, I
2013-07-01
The objective of this study was to develop and validate a mathematical model to detect clinical lameness based on existing sensor data that relate to the behavior and performance of cows on a commercial dairy farm. Identification of lame (44) and not lame (74) cows in the database was done based on the farm's daily herd health reports. All cows were equipped with a behavior sensor that measured neck activity and ruminating time. The cow's performance was measured with a milk yield meter in the milking parlor. In total, 38 model input variables were constructed from the sensor data, comprising absolute values, relative values, daily standard deviations, slope coefficients, daytime and nighttime periods, variables related to individual temperament, and milk session-related variables. Daily data were compared between the lame group (cows recognized and treated for lameness) and the not lame group. Correlations between the dichotomous output variable (lame or not lame) and the model input variables were calculated. The highest correlation coefficient was obtained for the milk yield variable (rMY=0.45). In addition, a logistic regression model was developed based on the 7 most highly correlated model input variables (the daily milk yield 4 d before diagnosis; the slope coefficient of the daily milk yield 4 d before diagnosis; the nighttime to daytime neck activity ratio 6 d before diagnosis; the milk yield week difference ratio 4 d before diagnosis; the milk yield week difference 4 d before diagnosis; the neck activity level during the daytime 7 d before diagnosis; the ruminating time during nighttime 6 d before diagnosis). After a 10-fold cross-validation, the model obtained a sensitivity of 0.89 and a specificity of 0.85, with a correct classification rate of 0.86 when based on the averaged 10-fold model coefficients.
This study demonstrates that existing farm data initially used for other purposes, such as heat detection, can be exploited for the automated detection of clinically lame animals on a daily basis as well. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
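The validation metrics quoted above come straight from a confusion matrix. A small sketch with invented labels (1 = lame, 0 = not lame; not the study's data) shows how sensitivity, specificity, and the correct classification rate are related:

```python
# Confusion-matrix metrics for a binary lameness classifier.
def classification_metrics(actual, predicted):
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    sensitivity = tp / (tp + fn)         # lame cows correctly flagged
    specificity = tn / (tn + fp)         # healthy cows correctly passed
    accuracy = (tp + tn) / len(actual)   # correct classification rate
    return sensitivity, specificity, accuracy

actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # hypothetical outcomes
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec, acc = classification_metrics(actual, predicted)
```

In a k-fold cross-validation, these metrics are computed on each held-out fold and then summarized, as the abstract describes for the 10-fold case.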
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the nonparametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed in the multi-pollutant setting.
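Regression calibration, the simplest method on this list, can be sketched for classical additive error: the naive slope from the error-prone exposure W is attenuated by the reliability ratio lambda = var(X)/var(W), so dividing by lambda recovers the true slope. The simulation below is a generic illustration with assumed parameters, not data from any reviewed study.

```python
# Regression calibration under classical measurement error (sketch).
import random

rng = random.Random(42)
n = 5000
beta_true = 2.0
x = [rng.gauss(0, 1) for _ in range(n)]               # true exposure
w = [xi + rng.gauss(0, 1) for xi in x]                # exposure + measurement error
y = [beta_true * xi + rng.gauss(0, 0.5) for xi in x]  # outcome

def slope(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

beta_naive = slope(w, y)   # attenuated toward zero
lam = 0.5                  # var(X)/var(W) = 1/(1+1), known here by construction
beta_corrected = beta_naive / lam
```

In practice lambda must itself be estimated from validation or replicate data, which is exactly where the exposure validation data mentioned above become necessary.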
Shanafield, Margaret; Niswonger, Richard G.; Prudic, David E.; Pohll, Greg; Susfalk, Richard; Panday, Sorab
2014-01-01
Infiltration along ephemeral channels plays an important role in groundwater recharge in arid regions. A model is presented for estimating spatial variability of seepage due to streambed heterogeneity along channels based on measurements of streamflow-front velocities in initially dry channels. The diffusion-wave approximation to the Saint-Venant equations, coupled with Philip's equation for infiltration, is connected to the groundwater model MODFLOW and is calibrated by adjusting the saturated hydraulic conductivity of the channel bed. The model is applied to portions of two large water delivery canals, which serve as proxies for natural ephemeral streams. Estimated seepage rates compare well with previously published values. Possible sources of error stem from uncertainty in Manning's roughness coefficients, soil hydraulic properties and channel geometry. Model performance would be most improved through more frequent longitudinal estimates of channel geometry and thalweg elevation, and with measurements of stream stage over time to constrain wave timing and shape. This model is a potentially valuable tool for estimating spatial variability in longitudinal seepage along intermittent and ephemeral channels over a wide range of bed slopes and the influence of seepage rates on groundwater levels.
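The point-scale infiltration component named above, Philip's equation, is simple enough to state directly: cumulative infiltration I(t) = S*sqrt(t) + K*t, with sorptivity S and saturated conductivity K. The parameter values below are illustrative only.

```python
# Philip's two-term infiltration equation (sketch; hypothetical parameters).
import math

def philip_cumulative(t_hours, sorptivity=1.2, k_sat=0.4):
    """Cumulative infiltration I(t) = S*sqrt(t) + K*t."""
    return sorptivity * math.sqrt(t_hours) + k_sat * t_hours

def philip_rate(t_hours, sorptivity=1.2, k_sat=0.4):
    """Infiltration rate f(t) = dI/dt = S/(2*sqrt(t)) + K."""
    return sorptivity / (2.0 * math.sqrt(t_hours)) + k_sat
```

The rate decays toward K at long times, which is why calibrating the saturated hydraulic conductivity of the channel bed, as the study does, controls the late-time seepage behavior.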
NASA Astrophysics Data System (ADS)
Setyaningsih, S.
2017-01-01
The main element in building a leading university is lecturer commitment in a professional manner. Commitment was measured through willpower, loyalty, pride, and integrity as a professional lecturer. A total of 135 of 337 university lecturers were sampled to collect data. Data were analyzed using validity and reliability tests and multiple linear regression. Many studies have found links to the commitment of lecturers, but the underlying causal relationship is generally neglected. These results indicate that the professional commitment of lecturers is affected by the variables empowerment, academic culture, and trust. The relationship model between the variables is composed of three substructures. The first substructure consists of the endogenous variable professional commitment and three exogenous variables, namely academic culture, empowerment, and trust, as well as the residual variable ɛy. The second substructure consists of one endogenous variable, trust, and two exogenous variables, empowerment and academic culture, and the residual variable ɛ3. The third substructure consists of one endogenous variable, academic culture, and one exogenous variable, empowerment, as well as the residual variable ɛ2. Multiple linear regression was used in the path model for each substructure. The results showed that the hypotheses were supported, and these findings provide empirical evidence that increasing these variables will have an impact on increasing the professional commitment of the lecturers.
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for fabricating dental arch models.
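The Bland-Altman analysis mentioned above reduces to the mean difference between the two methods (bias) and its 95% limits of agreement. A small sketch with invented measurements (mm), not the study's data:

```python
# Bland-Altman agreement statistics for two measurement methods (sketch).
from statistics import mean, pstdev

def bland_altman(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)                        # mean difference between methods
    sd = pstdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

plaster = [35.2, 42.1, 28.7, 51.3, 33.9, 46.5]   # hypothetical plaster values
put     = [35.4, 42.0, 29.0, 51.6, 34.1, 46.8]   # hypothetical PUT values

bias, (lo, hi) = bland_altman(plaster, put)
```

Good agreement in a Bland-Altman sense means a small bias and narrow limits relative to clinically acceptable error.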
Queri, S; Konrad, M; Keller, K
2012-08-01
Increasing stress-associated health problems in Germany are often attributed to problems on the job, in particular to rising work demands. The study combines several stress predictors from previous results and from the literature in one predictive model for the field of "psychiatric rehabilitation". A cross-sectional design was used to measure personal and organizational variables with quantitative standard questionnaires as self-ratings from n=243 pedagogically active employees from various professions. Overall stress and job stress were measured with different instruments. The sample showed above-average overall stress scores along with below-average job stress scores. The multivariate predictive model for explaining the heightened stress shows pathogenetic and salutogenetic main effects for organizational variables such as "gratification crisis" and personal variables such as "occupational self-efficacy expectations", as well as an interaction of both types of variables. There are relevant gender-specific results concerning empathy, and differences between professions concerning the extent of occupational self-efficacy. The results are of particular interest for the practice of workplace health promotion as well as for social work schools, the main group in our sample being social workers. © Georg Thieme Verlag KG Stuttgart · New York.
Douziech, Mélanie; Conesa, Irene Rosique; Benítez-López, Ana; Franco, Antonio; Huijbregts, Mark; van Zelm, Rosalie
2018-01-24
Large variations in removal efficiencies (REs) of chemicals have been reported for monitoring studies of activated sludge wastewater treatment plants (WWTPs). In this work, we conducted a meta-analysis on REs (1539 data points) for a set of 209 chemicals consisting of fragrances, surfactants, and pharmaceuticals in order to assess the drivers of the variability relating to inherent properties of the chemicals and operational parameters of activated sludge WWTPs. For a reduced dataset (n = 542), we developed a mixed-effect model (meta-regression) to explore the observed variability in REs for the chemicals using three chemical specific factors and four WWTP-related parameters. The overall removal efficiency of the set of chemicals was 82.1% (95% CI 75.2-87.1%, N = 1539). Our model accounted for 17% of the total variability in REs, while the process-based model SimpleTreat did not perform better than the average of the measured REs. We identified that, after accounting for other factors potentially influencing RE, readily biodegradable compounds were better removed than non-readily biodegradable ones. Further, we showed that REs increased with increasing sludge retention times (SRTs), especially for non-readily biodegradable compounds. Finally, our model highlighted a decrease in RE with increasing KOC. The counterintuitive relationship to KOC stresses the need for a better understanding of electrochemical interactions influencing the RE of ionisable chemicals. In addition, we highlighted the need to improve the modelling of chemicals that undergo deconjugation when predicting RE. Our meta-analysis represents a first step in better explaining the observed variability in measured REs of chemicals. It can be of particular help to prioritize the improvements required in existing process-based models to predict removal efficiencies of chemicals in WWTPs.
Negash, Selam; Wilson, Robert S.; Leurgans, Sue E.; Wolk, David A.; Schneider, Julie A.; Buchman, Aron S.; Bennett, David A.; Arnold, Steven E.
2014-01-01
Background Although it is now evident that normal cognition can occur despite significant AD pathology, few studies have attempted to characterize this discordance or examine factors that may contribute to resilient brain aging in the setting of AD pathology. Methods More than 2,000 older persons underwent annual evaluation as part of participation in the Religious Orders Study or the Rush Memory and Aging Project. A total of 966 subjects who had brain autopsy and comprehensive cognitive testing proximate to death were analyzed. Resilience was quantified as a continuous measure using linear regression modeling, where global cognition was entered as the dependent variable and global pathology as the independent variable. Studentized residuals generated from the model represented the discordance between cognition and pathology and served as the measure of resilience. The relation of the resilience index to known risk factors for AD and related variables was examined. Results Multivariate regression models that adjusted for demographic variables revealed significant associations for early-life socioeconomic status, reading ability, APOE-ε4 status, and past cognitive activity. A stepwise regression model retained reading level (estimate = 0.10, SE = 0.02; p < 0.0001) and past cognitive activity (estimate = 0.27, SE = 0.09; p = 0.002), suggesting a potential mediating role of these variables in resilience. Conclusions The construct of resilient brain aging can provide a framework for quantifying the discordance between cognition and pathology and help identify factors that may mediate this relationship. PMID:23919768
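The residual-based resilience index can be sketched in a few lines: regress global cognition on global pathology and take each person's standardized residual, i.e. how much better or worse they score than their pathology burden predicts. The sketch below uses ordinary residuals scaled by their standard deviation rather than exact studentized residuals, and the toy data are invented.

```python
# Resilience as the standardized residual of cognition regressed on pathology.
from statistics import mean, pstdev

pathology = [0.2, 0.5, 0.8, 1.1, 1.4, 1.7]    # hypothetical pathology burden
cognition = [0.9, 0.6, 0.7, 0.1, -0.4, -0.5]  # hypothetical cognition z-scores

mx, my = mean(pathology), mean(cognition)
beta = (sum((x - mx) * (y - my) for x, y in zip(pathology, cognition))
        / sum((x - mx) ** 2 for x in pathology))
alpha = my - beta * mx
residuals = [y - (alpha + beta * x) for x, y in zip(pathology, cognition)]
resilience = [r / pstdev(residuals) for r in residuals]  # crude standardization
```

A positive resilience score flags a person whose cognition exceeds what their pathology predicts, which is the discordance the study seeks to explain with factors such as reading level and past cognitive activity.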
Szymańska, Agnieszka; Dobrenko, Kamila; Grzesiuk, Lidia
2017-08-29
The study concerns the relationship between three groups of variables representing the patient's perspective: (1) the "patient's characteristics" before psychotherapy, including "expectations of the therapy"; (2) "experience in the therapy", including the "psychotherapeutic relationship"; and (3) "assessment of the direct effectiveness of the psychotherapy". Data from the literature are the basis for predicting relationships between all of these variables. Measurement of the variables was conducted using a follow-up survey. The survey was sent to a total of 1,210 former patients of the Academic Center for Psychotherapy (AOP), in which the therapy is conducted mainly with students and employees of the University of Warsaw. Responses were received from 276 people; 55% of the respondents were women and 45% were men, under 30 years of age. The analyses were performed using structural equations. Two models emerged from an analysis of the relationship between the three above-mentioned groups of variables. One concerns the relationship between (1) the patient's characteristics and (2) the course of psychotherapy in which, from the perspective of the patient, there is a good relationship with the psychotherapist, and (3) effective psychotherapy. The second model refers to (2) the patient's experience of a poor psychotherapeutic relationship and (3) ineffective psychotherapy. The patient's expectations of the psychotherapy (especially "the expectation of support") proved to be important moderating variables in the models, among the characteristics of the patient. The mathematical model also revealed a strong correlation between the variables measuring "the relationship with the psychotherapist" and "therapeutic interventions".
Krecl, Patricia; Johansson, Christer; Ström, Johan
2010-03-01
Residential wood combustion (RWC) is responsible for 33% of the total carbon mass emitted in Europe. With the new European targets to increase the use of renewable energy, there is a growing concern that the population exposure to woodsmoke will also increase. This study investigates observed and simulated light-absorbing carbon mass (MLAC) concentrations in a residential neighborhood (Lycksele, Sweden) where RWC is a major air pollution source during winter. The measurement analysis included descriptive statistics, correlation coefficient, coefficient of divergence, linear regression, concentration roses, diurnal pattern, and weekend versus weekday concentration ratios. Hourly RWC and road traffic contributions to MLAC were simulated with a Gaussian dispersion model to assess whether the model was able to mimic the observations. Hourly mean and standard deviation concentrations measured at six sites ranged from 0.58 to 0.74 microg m(-3) and from 0.59 to 0.79 microg m(-3), respectively. The temporal and spatial variability decreased with increasing averaging time. Low-wind periods with relatively high MLAC concentrations correlated more strongly than high-wind periods with low concentrations. On average, the model overestimated the observations by 3- to 5-fold and explained less than 10% of the measured hourly variability at all sites. Large residual concentrations were associated with weak winds and relatively high MLAC loadings. The explanation of the observed variability increased to 31-45% when daily mean concentrations were compared. When the contribution from the boilers within the neighborhood was excluded from the simulations, the model overestimation decreased to 16-71%. When assessing the exposure to light-absorbing carbon particles using this type of model, the authors suggest using a longer averaging period (i.e., daily concentrations) in a larger area with an updated and very detailed emission inventory.
Adaptive data-driven models for estimating carbon fluxes in the Northern Great Plains
Wylie, B.K.; Fosnight, E.A.; Gilmanov, T.G.; Frank, A.B.; Morgan, J.A.; Haferkamp, Marshall R.; Meyers, T.P.
2007-01-01
Rangeland carbon fluxes are highly variable in both space and time. Given the expansive areas of rangelands, how rangelands respond to climatic variation, management, and soil potential is important to understanding carbon dynamics. Rangeland carbon fluxes associated with Net Ecosystem Exchange (NEE) were measured from multiple year data sets at five flux tower locations in the Northern Great Plains. These flux tower measurements were combined with 1-km2 spatial data sets of Photosynthetically Active Radiation (PAR), Normalized Difference Vegetation Index (NDVI), temperature, precipitation, seasonal NDVI metrics, and soil characteristics. Flux tower measurements were used to train and select variables for a rule-based piece-wise regression model. The accuracy and stability of the model were assessed through random cross-validation and cross-validation by site and year. Estimates of NEE were produced for each 10-day period during each growing season from 1998 to 2001. Growing season carbon flux estimates were combined with winter flux estimates to derive and map annual estimates of NEE. The rule-based piece-wise regression model is a dynamic, adaptive model that captures the relationships of the spatial data to NEE as conditions evolve throughout the growing season. The carbon dynamics in the Northern Great Plains proved to be in near equilibrium, serving as a small carbon sink in 1999 and as a small carbon source in 1998, 2000, and 2001. Patterns of carbon sinks and sources are very complex, with the carbon dynamics tilting toward sources in the drier west and toward sinks in the east and near the mountains in the extreme west. Significant local variability exists, which initial investigations suggest are likely related to local climate variability, soil properties, and management.
Determining Spatial Variability in PM2.5 Source Impacts across Detroit, MI
Intra-urban variability in air pollution source impacts was investigated using receptor modeling of daily speciated PM2.5 measurements collected at residential outdoor locations across Detroit, MI (Wayne County) as part of the Detroit Exposure and Aerosol Research Stud...
Estimating lake-atmosphere CO2 exchange
Anderson, D.E.; Striegl, Robert G.; Stannard, D.I.; Michmerhuizen, C.M.; McConnaughey, T.A.; LaBaugh, J.W.
1999-01-01
Lake-atmosphere CO2 flux was directly measured above a small, woodland lake using the eddy covariance technique and compared with fluxes deduced from changes in measured lake-water CO2 storage and with flux predictions from boundary-layer and surface-renewal models. Over a 3-yr period, lake-atmosphere exchanges of CO2 were measured over 5 weeks in spring, summer, and fall. Observed springtime CO2 efflux was large (2.3-2.7 μmol m-2 s-1) immediately after lake-thaw. That efflux decreased exponentially with time to less than 0.2 μmol m-2 s-1 within 2 weeks. Substantial interannual variability was found in the magnitudes of springtime efflux, surface water CO2 concentrations, lake CO2 storage, and meteorological conditions. Summertime measurements show a weak diurnal trend with a small average downward flux (-0.17 μmol m-2 s-1) to the lake's surface, while late fall flux was trendless and smaller (-0.0021 μmol m-2 s-1). Large springtime efflux afforded an opportunity to make direct measurement of lake-atmosphere fluxes well above the detection limits of eddy covariance instruments, facilitating the testing of different gas flux methodologies and air-water gas-transfer models. Although there was an overall agreement in fluxes determined by eddy covariance and those calculated from lake-water storage change in CO2, agreement was inconsistent between eddy covariance flux measurements and fluxes predicted by boundary-layer and surface-renewal models. Comparison of measured and modeled transfer velocities for CO2, along with measured and modeled cumulative CO2 flux, indicates that in most instances the surface-renewal model underpredicts actual flux. Greater underestimates were found with comparisons involving homogeneous boundary-layer models. No physical mechanism responsible for the inconsistencies was identified by analyzing coincidentally measured environmental variables.
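The eddy covariance estimate itself is just the mean product of the fluctuating parts of vertical wind speed (w) and CO2 density (c) over an averaging period. The toy series below (hypothetical samples, not the lake data) illustrates the computation and the sign convention, with positive flux meaning net efflux to the atmosphere:

```python
# Eddy covariance flux = mean(w' * c'), primes = deviations from period means.
from statistics import mean

def eddy_covariance_flux(w, c):
    wbar, cbar = mean(w), mean(c)
    return mean((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c))

# Hypothetical samples: updrafts (w > 0) carry CO2-enriched air upward.
w = [0.3, -0.2, 0.5, -0.4, 0.1, -0.3, 0.4, -0.4]    # vertical wind, m/s
c = [15.2, 14.8, 15.4, 14.6, 15.1, 14.7, 15.3, 14.7]  # CO2 density, arbitrary units

flux = eddy_covariance_flux(w, c)   # positive: net CO2 efflux from the surface
```

In practice a real processing chain adds coordinate rotation, density (WPL) corrections, and quality filtering on top of this core covariance.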
NASA Astrophysics Data System (ADS)
Laska, K.; Prosek, P.; Budik, L.; Budikova, M.
2009-04-01
The results of global solar and erythemally effective ultraviolet (EUV) radiation measurements are presented. The radiation data were collected during 2006-2007 at the Czech Antarctic station J. G. Mendel, James Ross Island (63°48'S, 57°53'W). Global solar radiation was measured by a Kipp & Zonen CM11 pyranometer. EUV radiation was measured according to the McKinley and Diffey erythemal action spectrum with a Solar Light broadband UV-Biometer Model 501A. The effects of stratospheric ozone concentration and cloudiness (estimated as a cloud impact factor from global solar radiation) on the intensity of incident EUV radiation were calculated with a non-linear regression model. Total ozone content (TOC) and cloud/surface reflectivity derived from satellite-based measurements were applied in the model to eliminate uncertainties in the measured ozone values. Two TOC input datasets were used in the model: the first was taken from Dobson spectrophotometer measurements (Argentinean Antarctic station Marambio), and the second was acquired for the geographical coordinates of the Mendel Station from the EOS Aura Ozone Monitoring Instrument and the V8.5 algorithm. Analysis of the measured EUV data showed that variable cloudiness affected mainly short-term fluctuations of the radiation fluxes, while ozone declines caused a long-term UV radiation increase in the second half of the year. The model explained about 98% of the variability in the measured EUV radiation. The residuals between measured and modeled EUV radiation intensities were evaluated separately for the two TOC datasets specified above, parts of seasons, and the cloud impact factor (cloudiness). The mean average prediction error was used for model validation according to the cloud impact factor and satellite-based reflectivity data.
NASA Technical Reports Server (NTRS)
Fast, Kelly E.; Kostiuk, Theodor; Lefevre, Frank; Hewagama, Tilak; Livengood, Timothy A.; Delgado, Juan D.; Annen, John; Sonnabend, Guido
2009-01-01
Ozone is a tracer of photochemistry in the atmosphere of Mars and an observable used to test predictions of photochemical models. We present a comparison of retrieved ozone abundances on Mars using ground-based infrared heterodyne measurements by NASA Goddard Space Flight Center's Heterodyne Instrument for Planetary Wind And Composition (HIPWAC) and space-based Mars Express Spectroscopy for the Investigation of the Characteristics of the Atmosphere of Mars (SPICAM) ultraviolet measurements. Ozone retrievals from simultaneous measurements in February 2008 were very consistent (0.8 microns-atm), as were measurements made close in time (ranging from less than 1 to greater than 8 microns-atm) during this period and during opportunities in October 2006 and February 2007. The consistency of retrievals from the two different observational techniques supports combining the measurements for testing photochemistry-coupled general circulation models and for investigating variability over the long term between spacecraft missions. Quantitative comparison with ground-based measurements by NASA GSFC's Infrared Heterodyne Spectrometer (IRHS) in 1993 reveals 2-4 times more ozone at low latitudes than in 2008 at the same season, and such variability was not evident over the shorter period of the Mars Express mission. This variability may be due to cloud activity.
Andersen, Claus E; Raaschou-Nielsen, Ole; Andersen, Helle Primdal; Lind, Morten; Gravesen, Peter; Thomsen, Birthe L; Ulbak, Kaare
2007-01-01
A linear regression model has been developed for the prediction of indoor ²²²Rn in Danish houses. The model provides proxy radon concentrations for about 21,000 houses in a Danish case-control study on the possible association between residential radon and childhood cancer (primarily leukaemia). The model was calibrated against radon measurements in 3116 houses. An independent dataset with 788 house measurements was used for model performance assessment. The model includes nine explanatory variables, of which the most important ones are house type and geology. All explanatory variables are available from central databases. The model was fitted to log-transformed radon concentrations and it has an R² of 40%. The uncertainty associated with individual predictions of (untransformed) radon concentrations is about a factor of 2.0 (one standard deviation). The comparison with the independent test data shows that the model makes sound predictions and that errors of radon predictions are only weakly correlated with the estimates themselves (R² = 10%).
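The core of such a model, ordinary least squares on log-transformed concentrations with the residual spread back-transformed into a multiplicative uncertainty factor, can be sketched on synthetic data; the single numeric "geology" predictor and all coefficients here are invented for illustration:

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical data: log-radon depends on a numeric geology score plus noise.
geology = [random.uniform(0.0, 3.0) for _ in range(500)]
log_rn = [3.0 + 0.5 * g + random.gauss(0.0, 0.7) for g in geology]

# Closed-form ordinary least squares for a single predictor.
n = len(geology)
mx = statistics.fmean(geology)
my = statistics.fmean(log_rn)
b1 = sum((x - mx) * (y - my) for x, y in zip(geology, log_rn)) \
    / sum((x - mx) ** 2 for x in geology)
b0 = my - b1 * mx

# Residual spread on the log scale becomes a multiplicative uncertainty
# factor on the original concentration scale.
resid_sd = math.sqrt(sum((y - b0 - b1 * x) ** 2
                         for x, y in zip(geology, log_rn)) / (n - 2))
factor = math.exp(resid_sd)           # ~2 means "within a factor of 2"
prediction = math.exp(b0 + b1 * 1.5)  # point prediction at geology score 1.5
```

Back-transforming the residual standard deviation is how a log-scale fit yields the "factor of 2.0" uncertainty statement quoted in the abstract.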
NASA Astrophysics Data System (ADS)
Peltola, Olli; Raivonen, Maarit; Li, Xuefei; Vesala, Timo
2018-02-01
Emission via bubbling, i.e. ebullition, is one of the main methane (CH4) emission pathways from wetlands to the atmosphere. Direct measurement of gas bubble formation, growth and release in the peat-water matrix is challenging; consequently these processes remain poorly understood and are coarsely represented in current wetland CH4 emission models. In this study we aimed to evaluate three ebullition modelling approaches and their effect on model performance. This was achieved by implementing the three approaches in one process-based CH4 emission model. Each approach is based on a threshold: CH4 pore-water concentration (ECT), pressure (EPT) or free-phase gas volume (EBG). The model was run using 4 years of data from a boreal sedge fen and the results were compared with eddy covariance measurements of CH4 fluxes.
Modelled annual CH4 emissions were largely unaffected by the choice of ebullition modelling approach; however, the temporal variability in CH4 emissions varied by an order of magnitude between the approaches. Hence the ebullition modelling approach drives the temporal variability in modelled CH4 emissions and therefore significantly impacts, for instance, high-frequency (daily scale) model comparison and calibration against measurements. The approach based on the most recent knowledge of the ebullition process (volume threshold, EBG) agreed best with the measured fluxes (R2 = 0.63) and hence produced the most reasonable results, although there was a scale mismatch between the measurements (ecosystem scale with heterogeneous ebullition locations) and the model results (a single horizontally homogeneous peat column). This approach should be favoured over the two other, more widely used ebullition modelling approaches, and researchers are encouraged to implement it in their CH4 emission models.
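The shared logic of the three approaches can be illustrated with a toy concentration-threshold (ECT-style) scheme; the production rate and threshold are arbitrary units chosen for illustration, not values from the study:

```python
# Toy concentration-threshold (ECT-style) ebullition scheme.
production = 0.5    # CH4 added to pore water per time step (arbitrary units)
threshold = 10.0    # pore-water concentration that triggers bubble release
conc = 0.0
fluxes = []
for _ in range(100):
    conc += production
    if conc > threshold:
        fluxes.append(conc - threshold)   # the excess escapes as a bubble
        conc = threshold
    else:
        fluxes.append(0.0)

# The cumulative emission is fixed by production, but its timing depends
# entirely on when the threshold is crossed.
total_emitted = sum(fluxes)
```

With constant production the releases settle into a steady trickle, but with time-varying production they become episodic; the pressure (EPT) and gas-volume (EBG) variants differ only in which state variable carries the threshold, which is why annual totals agree while daily-scale variability diverges.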
Burns, Angus; Dowling, Adam H; Garvey, Thérèse M; Fleming, Garry J P
2014-10-01
To investigate the inter-examiner variability of contact point displacement measurements (used to calculate the overall Little's Irregularity Index (LII) score) from digital models of the maxillary arch by four independent examiners. Maxillary orthodontic pre-treatment study models of ten patients were scanned using the Lava™ Chairside Oral Scanner (LCOS) and 3D digital models were created using Creo® computer aided design (CAD) software. Four independent examiners measured the contact point displacements of the anterior maxillary teeth using the software. Measurements were recorded randomly on three separate occasions by the examiners and the measurements (n=600) obtained were analysed using correlation analyses and analyses of variance (ANOVA). LII contact point displacement measurements for the maxillary arch were reproducible for inter-examiner assessment when using the digital method and were highly correlated between examiner pairs for contact point displacements >2 mm. The digital measurement technique showed poor correlation for smaller contact point displacements (<2 mm) on repeated measurement. The coefficient of variation (CoV) of the digital contact point displacement measurements highlighted that 348 of the 600 measurements differed by more than 20% of the mean, compared with 516 of 600 for the same measurements performed using the conventional LII measurement technique. Although the inter-examiner variability of LII contact point displacement measurements on the maxillary arch was reduced using the digital compared with the conventional LII measurement methodology, neither method was considered appropriate for orthodontic research purposes, particularly when measuring small contact point displacements. Copyright © 2014 Elsevier Ltd. All rights reserved.
Introduction to the use of regression models in epidemiology.
Bender, Ralf
2009-01-01
Regression modeling is one of the most important statistical techniques used in analytical epidemiology. By means of regression models, the effect of one or several explanatory variables (e.g., exposures, subject characteristics, risk factors) on a response variable such as mortality or cancer can be investigated. From multiple regression models, adjusted effect estimates can be obtained that take the effect of potential confounders into account. Regression methods can be applied in all epidemiologic study designs, so they represent a universal tool for data analysis in epidemiology. Different kinds of regression models have been developed depending on the measurement scale of the response variable and the study design. The most important methods are linear regression for continuous outcomes, logistic regression for binary outcomes, Cox regression for time-to-event data, and Poisson regression for frequencies and rates. This chapter provides a nontechnical introduction to these regression models with illustrative examples from cancer research.
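As one concrete instance of these methods, a logistic regression for a binary outcome can be fitted by maximising the log-likelihood. The sketch below uses plain gradient ascent on a synthetic cohort (all numbers invented); in practice one would use a statistical package rather than hand-rolled optimisation:

```python
import math
import random

random.seed(0)

expit = lambda t: 1.0 / (1.0 + math.exp(-t))

# Synthetic cohort: a continuous exposure raises the risk of a binary outcome.
n = 500
exposure = [random.random() for _ in range(n)]
outcome = [1 if random.random() < expit(-1.0 + 2.0 * x) else 0
           for x in exposure]

# Fit logistic regression by gradient ascent on the log-likelihood.
b0, b1 = 0.0, 0.0
for _ in range(3000):
    g0 = g1 = 0.0
    for x, y in zip(exposure, outcome):
        r = y - expit(b0 + b1 * x)   # working residual
        g0 += r
        g1 += r * x
    b0 += g0 / n
    b1 += g1 / n

odds_ratio = math.exp(b1)   # effect estimate per unit of exposure
```

Adding further covariates to the linear predictor is what produces the confounder-adjusted effect estimates described in the abstract.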
Identifying Bearing Rotodynamic Coefficients Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Miller, Brad A.; Howard, Samuel A.
2008-01-01
An Extended Kalman Filter is developed to estimate the linearized direct and indirect stiffness and damping force coefficients for bearings in rotor dynamic applications from noisy measurements of the shaft displacement in response to imbalance and impact excitation. The bearing properties are modeled as stochastic random variables using a Gauss-Markov model. Noise terms are introduced into the system model to account for all of the estimation error, including modeling errors and uncertainties and the propagation of measurement errors into the parameter estimates. The system model contains two user-defined parameters that can be tuned to improve the filter's performance; these parameters correspond to the covariance of the system and measurement noise variables. The filter is also strongly influenced by the initial values of the states and the error covariance matrix. The filter is demonstrated using numerically simulated data for a rotor bearing system with two identical bearings, which reduces the number of unknown linear dynamic coefficients to eight. The filter estimates for the direct damping coefficients and all four stiffness coefficients correlated well with actual values, whereas the estimates for the cross-coupled damping coefficients were the least accurate.
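The predict/update cycle at the heart of such a filter can be illustrated with a scalar linear Kalman filter estimating a single constant parameter from noisy measurements; the full extended filter for eight bearing coefficients is far larger, but Q, R and the initial covariance play exactly the tuning roles described above (all values here are hypothetical):

```python
import random

random.seed(3)

true_k = 5.0      # "true" parameter to recover (hypothetical units)
R = 0.25          # measurement-noise variance (user-tuned)
Q = 1e-4          # process-noise variance (user-tuned)
x, P = 0.0, 10.0  # initial state estimate and error covariance

for _ in range(200):
    z = true_k + random.gauss(0.0, R ** 0.5)  # noisy measurement
    P = P + Q                  # predict: random-walk (Gauss-Markov) parameter
    K = P / (P + R)            # Kalman gain
    x = x + K * (z - x)        # update the estimate
    P = (1.0 - K) * P          # update the error covariance
```

A badly scaled Q or R, or a poor initial P, slows convergence or degrades the estimate, mirroring the filter's sensitivity to the user-defined parameters noted in the abstract.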
Ozone Lidar Observations for Air Quality Studies
NASA Technical Reports Server (NTRS)
Wang, Lihua; Newchurch, Mike; Kuang, Shi; Burris, John F.; Huang, Guanyu; Pour-Biazar, Arastoo; Koshak, William; Follette-Cook, Melanie B.; Pickering, Kenneth E.; McGee, Thomas J.;
2015-01-01
Tropospheric ozone lidars are well suited to measuring the high spatio-temporal variability of this important trace gas. Furthermore, lidar measurements in conjunction with balloon soundings, aircraft, and satellite observations provide substantial information about a variety of atmospheric chemical and physical processes. Examples of processes elucidated by ozone-lidar measurements are presented, and modeling studies using WRF-Chem, RAQMS, and DALES/LES models illustrate our current understanding and shortcomings of these processes.
Solar Control of Earth's Ionosphere: Observations from Solar Cycle 23
NASA Astrophysics Data System (ADS)
Doe, R. A.; Thayer, J. P.; Solomon, S. C.
2005-05-01
A nine year database of sunlit E-region electron density altitude profiles (Ne(z)) measured by the Sondrestrom ISR has been partitioned over a 30-bin parameter space of averaged 10.7 cm solar radio flux (F10.7) and solar zenith angle (χ) to investigate long-term solar and thermospheric variability, and to validate contemporary EUV photoionization models. A two stage filter, based on rejection of Ne(z) profiles with large Hall to Pedersen ratio, is used to minimize auroral contamination. Resultant filtered mean Ne(z) compares favorably with subauroral Ne measured for the same F10.7 and χ conditions at the Millstone Hill ISR. Mean Ne, as expected, increases with solar activity and decreases with large χ, and the variance around mean Ne is shown to be greatest at low F10.7 (solar minimum). ISR-derived mean Ne is compared with two EUV models: (1) a simple model without photoelectrons and based on the 5 -- 105 nm EUVAC model solar flux [Richards et al., 1994] and (2) the GLOW model [Solomon et al., 1988; Solomon and Abreu, 1989] suitably modified for inclusion of XUV spectral components and photoelectron flux. Across parameter space and for all altitudes, Model 2 provides a closer match to ISR mean Ne and suggests that the photoelectron and XUV enhancements are essential to replicate measured plasma densities below 150 km. Simulated Ne variance envelopes, given by perturbing the Model 2 neutral atmosphere input by the measured extremum in Ap, F10.7, and Te, are much narrower than ISR-derived geophysical variance envelopes. We thus conclude that long-term variability of the EUV spectra dominates over thermospheric variability and that EUV spectral variability is greatest at solar minimum. ISR -- model comparison also provides evidence for the emergence of an H (Lyman β) Ne feature at solar maximum. Richards, P. G., J. A. Fennelly, and D. G. Torr, EUVAC: A solar EUV flux model for aeronomic calculations, J. Geophys. Res., 99, 8981, 1994. Solomon, S. C., P. B. Hays, and V. 
J. Abreu, The auroral 6300 Å emission: Observations and Modeling, J. Geophys. Res., 93, 9867, 1988. Solomon, S. C. and V. J. Abreu, The 630 nm dayglow, J. Geophys. Res., 94, 6817, 1989.
Predictive modeling and reducing cyclic variability in autoignition engines
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-08-30
Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.
An attempt at estimating Paris area CO2 emissions from atmospheric concentration measurements
NASA Astrophysics Data System (ADS)
Bréon, F. M.; Broquet, G.; Puygrenier, V.; Chevallier, F.; Xueref-Remy, I.; Ramonet, M.; Dieudonné, E.; Lopez, M.; Schmidt, M.; Perrussel, O.; Ciais, P.
2015-02-01
Atmospheric concentration measurements are used to adjust the daily to monthly budget of fossil fuel CO2 emissions of the Paris urban area from the prior estimates established by the Airparif local air quality agency. Five atmospheric monitoring sites are available, including one at the top of the Eiffel Tower. The atmospheric inversion is based on a Bayesian approach and relies on an atmospheric transport model with a spatial resolution of 2 km, with boundary conditions from a global coarse-grid transport model. The inversion adjusts prior knowledge about the anthropogenic and biogenic CO2 fluxes from the Airparif inventory and an ecosystem model, respectively, with corrections at a temporal resolution of 6 h, while keeping the spatial distribution from the emission inventory. These corrections are based on assumptions regarding the temporal autocorrelation of prior emission uncertainties within the daily cycle and from day to day. The comparison of the measurements against the atmospheric transport simulation driven by the a priori CO2 surface fluxes shows significant differences upwind of the Paris urban area, which suggests a large and uncertain contribution from distant sources and sinks to the CO2 concentration variability. This contribution suggests that the inversion should aim at minimising model-data misfits in upwind-downwind gradients rather than misfits in mole fractions at individual sites. Another conclusion of the direct model-measurement comparison is that the CO2 variability at the top of the Eiffel Tower is large and poorly represented by the model for most wind speeds and directions. The model's inability to reproduce the CO2 variability at the heart of the city makes such measurements ill-suited for the inversion. This, and the need to constrain the budgets for the whole city, suggests assimilating only upwind-downwind mole fraction gradients between sites at the edge of the urban area.
The inversion significantly improves the agreement between measured and modelled concentration gradients. Realistic emissions are retrieved for two 30-day periods and suggest a significant overestimate in the Airparif inventory. Similar inversions over longer periods are necessary for a proper evaluation of the optimised CO2 emissions against independent data.
Castedo-Dorado, Fernando; Hevia, Andrea; Vega, José Antonio; Vega-Nieva, Daniel; Ruiz-González, Ana Daría
2017-01-01
The fuel complex variables canopy bulk density and canopy base height are often used to predict crown fire initiation and spread. Direct measurement of these variables is impractical, and they are usually estimated indirectly by modelling. Recent advances in predicting crown fire behaviour require accurate estimates of the complete vertical distribution of canopy fuels. The objectives of the present study were to model the vertical profile of available canopy fuel in pine stands by using data from the Spanish national forest inventory plus low-density airborne laser scanning (ALS) metrics. In a first step, the vertical distribution of the canopy fuel load was modelled using the Weibull probability density function. In a second step, two different systems of models were fitted to estimate the canopy variables defining the vertical distributions; the first system related these variables to stand variables obtained in a field inventory, and the second related them to airborne laser scanning metrics. The models of each system were fitted simultaneously to compensate for the effects of the inherent cross-model correlation between the canopy variables. Heteroscedasticity was also analyzed, but no correction in the fitting process was necessary. The estimated canopy fuel load profiles from field variables explained 84% and 86% of the variation in canopy fuel load for maritime pine and radiata pine, respectively, whereas the estimated profiles from ALS metrics explained 52% and 49% of the variation for the same species. The proposed models can be used to assess the effectiveness of different forest management alternatives for reducing crown fire hazard. PMID:28448524
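The first modelling step, distributing a total canopy fuel load over height with a Weibull density, can be sketched as follows; the shape, scale and total load are invented for illustration, not fitted values from the study:

```python
import math

def weibull_pdf(h, shape, scale):
    """Weibull density describing the vertical canopy fuel profile."""
    return (shape / scale) * (h / scale) ** (shape - 1) \
        * math.exp(-(h / scale) ** shape)

# Spread a total available canopy fuel load over thin height layers.
total_load = 1.2   # kg/m2, hypothetical
dz = 0.01          # layer thickness, m
heights = [i * dz for i in range(1, 3000)]   # up to ~30 m
layer_loads = [total_load * weibull_pdf(h, shape=2.5, scale=8.0) * dz
               for h in heights]

peak_height = heights[layer_loads.index(max(layer_loads))]
```

Because the density integrates to one, the layer loads sum back to the total; fitting the shape and scale parameters to inventory or ALS data is what the two systems of models in the study do.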
Spatial variability of E. coli in an urban salt-wedge estuary.
Jovanovic, Dusan; Coleman, Rhys; Deletic, Ana; McCarthy, David
2017-01-15
This study investigated the spatial variability of a common faecal indicator organism, Escherichia coli, in an urban salt-wedge estuary in Melbourne, Australia. Data were collected through comprehensive depth profiling in the water column at four sites and included measurements of temperature, salinity, pH, dissolved oxygen, turbidity, and E. coli concentrations. Vertical variability of E. coli was closely related to the salt-wedge dynamics; in the presence of a salt-wedge, there was a significant decrease in E. coli concentrations with depth. Transverse variability was low and was most likely dwarfed by the analytical uncertainties of E. coli measurements. Longitudinal variability was also low, potentially reflecting minimal die-off, settling, and additional inputs entering along the estuary. These results were supported by a simple mixing model that predicted E. coli concentrations based on salinity measurements. Additionally, an assessment of a sentinel monitoring station suggested routine monitoring locations may produce conservative estimates of E. coli concentrations in stratified estuaries. Copyright © 2016 Elsevier Ltd. All rights reserved.
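A "simple mixing model that predicts E. coli concentrations based on salinity" is presumably a conservative two-endmember calculation; a sketch with invented endmember values (not those measured in the study):

```python
def mixing_prediction(salinity, c_fresh=5000.0, c_marine=50.0,
                      s_fresh=0.0, s_marine=35.0):
    """Conservative two-endmember mixing: salinity fixes the fresh fraction.

    Endmember concentrations (MPN/100 mL) and salinities are hypothetical.
    """
    f = (s_marine - salinity) / (s_marine - s_fresh)  # freshwater fraction
    return f * c_fresh + (1.0 - f) * c_marine

# A fresh surface layer carries far more E. coli than the salt wedge below.
surface = mixing_prediction(2.0)    # near-surface, low salinity
bottom = mixing_prediction(30.0)    # inside the salt wedge
```

Agreement between such predictions and depth-profiled counts is consistent with minimal die-off, settling, or extra inputs along the estuary, as the abstract argues.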
Calus, M P L; de Haas, Y; Veerkamp, R F
2013-10-01
Genomic selection holds the promise to be particularly beneficial for traits that are difficult or expensive to measure, such that access to phenotypes on large daughter groups of bulls is limited. Instead, cow reference populations can be generated, potentially supplemented with existing information from the same or (highly) correlated traits available on bull reference populations. The objective of this study, therefore, was to develop a model to perform genomic predictions and genome-wide association studies based on a combined cow and bull reference data set, with the accuracy of the phenotypes differing between the cow and bull genomic selection reference populations. The developed bivariate Bayesian stochastic search variable selection model allowed for an unbalanced design by imputing residuals in the residual updating scheme for all missing records. The performance of this model is demonstrated on a real data example, in which the analyzed trait, milk fat or protein yield, was either measured only on a cow or a bull reference population, or recorded on both. The developed bivariate Bayesian stochastic search variable selection model was able to analyze 2 traits even though animals had measurements on only 1 of the 2 traits. The Bayesian stochastic search variable selection model yielded consistently higher accuracy for fat yield compared with a model without variable selection, both for the univariate and bivariate analyses, whereas the accuracy of both models was very similar for protein yield. The bivariate model identified several additional quantitative trait loci peaks compared with the single-trait models on either trait. In addition, the bivariate models showed a marginal increase in accuracy of genomic predictions for the cow traits (0.01-0.05), although a greater increase in accuracy is expected as the size of the bull population increases.
Our results emphasize that the chosen values of the priors in Bayesian genomic prediction models are especially important in small data sets. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
How well do the GCMs replicate the historical precipitation variability in the Colorado River Basin?
NASA Astrophysics Data System (ADS)
Guentchev, G.; Barsugli, J. J.; Eischeid, J.; Raff, D. A.; Brekke, L.
2009-12-01
Observed precipitation variability measures are compared to measures obtained using the World Climate Research Programme (WCRP) Coupled Model Intercomparison Project (CMIP3) General Circulation Model (GCM) data from 36 model projections downscaled by Brekke et al. (2007) and 30 model projections downscaled by Jon Eischeid. Three groups of variability measures are considered in this historical period (1951-1999) comparison: a) basic variability measures, such as the standard deviation and interdecadal standard deviation; b) exceedance probability values, i.e., the 10% (extreme wet years) and 90% (extreme dry years) exceedance probability values of series of n-year running mean annual amounts, where n = 1, ..., 12, and the 10% exceedance probability values of annual maximum monthly precipitation (extreme wet months); and c) runs variability measures, e.g., the frequency of negative and positive runs of annual precipitation amounts and the total number of negative and positive runs. Two gridded precipitation data sets produced from observations are used: the Maurer et al. (2002) data set and the Daly et al. (1994) Precipitation Regression on Independent Slopes Method (PRISM) data set. The data consist of monthly grid-point precipitation averaged at the United States Geological Survey (USGS) hydrological sub-region scale. The statistical significance of the obtained model minus observed differences is assessed using a block bootstrapping approach. The analyses were performed on annual, seasonal and monthly scales. The results indicate that the interdecadal standard deviation is, in general, underestimated on all time scales by the downscaled model data. The differences are statistically significant at the 0.05 significance level for several Lower Colorado Basin sub-regions on the annual and seasonal scales, and for several sub-regions located mostly in the Upper Colorado River Basin for the months of March, June, July and November.
Although the models simulate drier extreme wet years, wetter extreme dry years and drier extreme wet months for the Upper Colorado Basin, the differences are mostly not significant. Exceptions are the results for the extreme wet years for n=3 for sub-region White-Yampa and for n=6, 7, and 8 for sub-region Upper Colorado-Dolores, and for the extreme dry years for n=11 for sub-region Great Divide-Upper Green. None of the results for the sub-regions in the Lower Colorado Basin were significant. For most of the Upper Colorado sub-regions the models simulate a significantly lower frequency of negative and positive 4-6 year runs, while for several sub-regions a significantly higher frequency of 2-year negative runs is evident in the comparisons of the models against the Maurer data. The comparison of the model projections against the PRISM data reveals similar results for the negative runs, while for the positive runs the models simulate a higher frequency of 2-6 year runs. The results for the Lower Colorado Basin sub-regions are, in general, similar to those for the Upper Colorado sub-regions. The differences between the simulated and observed total numbers of negative or positive runs were not significant for almost all of the sub-regions within the Colorado River Basin.
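Block bootstrapping of the kind used for the significance assessment resamples contiguous blocks so that year-to-year autocorrelation is preserved; a sketch with a synthetic 49-year series and an arbitrary block length:

```python
import random
import statistics

random.seed(7)

def block_bootstrap_means(series, block_len, n_resamples=1000):
    """Resample contiguous blocks (preserving autocorrelation) and return
    the distribution of resampled means."""
    n = len(series)
    starts = range(n - block_len + 1)
    means = []
    for _ in range(n_resamples):
        sample = []
        while len(sample) < n:
            s = random.choice(starts)
            sample.extend(series[s:s + block_len])
        means.append(statistics.fmean(sample[:n]))
    return means

series = [random.gauss(0.0, 1.0) for _ in range(49)]  # e.g. 49 years, 1951-1999
dist = sorted(block_bootstrap_means(series, block_len=5))
lo, hi = dist[25], dist[975]   # approximate 95% interval for the mean
```

A model-minus-observed difference falling outside such an interval is what "statistically significant at the 0.05 level" means here; the same machinery applies to any of the variability measures, not just the mean.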
Fischer, A; Friggens, N C; Berry, D P; Faverdin, P
2018-07-01
The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include model-fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual-animal variability in feed efficiency from the residual component. Two separate models were fitted. In one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation-average net energy intake (NEI) on lactation-average milk energy output and average metabolic BW, as well as lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and the intercept, using fortnightly repeated measures for the variables. This method splits the predicted NEI into two parts: one quantifying the population-mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared.
The variance of REI estimated with the lactation average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation average model or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation average model may therefore reflect model fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
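The idea of separating a cow-specific component from noise can be illustrated with a much simpler device than the random-regression mixed model actually used: a one-way random-effects variance decomposition (method of moments) on synthetic repeated measures, with all variances invented:

```python
import random
import statistics

random.seed(11)

# Synthetic repeated measures: a per-cow effect plus fortnight-level noise.
n_cows, n_times = 119, 17
cow_effect = [random.gauss(0.0, 1.0) for _ in range(n_cows)]
records = [[ce + random.gauss(0.0, 2.0) for _ in range(n_times)]
           for ce in cow_effect]

# Method of moments: the variance of cow means overstates the true
# between-cow variance by within_var / n_times.
cow_means = [statistics.fmean(row) for row in records]
within_var = statistics.fmean(statistics.variance(row) for row in records)
between_var = statistics.variance(cow_means) - within_var / n_times

cow_share = between_var / (between_var + within_var)
```

The cow-specific share recovered here plays the same conceptual role as the 53-59% of REI variance attributed to true efficiency differences in the study; the mixed model additionally lets the regression coefficients themselves vary by cow.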
Multilevel Modeling with Correlated Effects
ERIC Educational Resources Information Center
Kim, Jee-Seon; Frees, Edward W.
2007-01-01
When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…
Maps and models of density and stiffness within individual Douglas-fir trees
Christine L. Todoroki; Eini C. Lowell; Dennis P. Dykstra; David G. Briggs
2012-01-01
Spatial maps of density and stiffness patterns within individual trees were developed using two methods: (1) measured wood properties of veneer sheets; and (2) mixed effects models, to test the hypothesis that within-tree patterns could be predicted from easily measurable tree variables (height, taper, breast-height diameter, and acoustic velocity). Sample trees...
Imputation and Model-Based Updating Techniques for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
AIC identifies optimal representation of longitudinal dietary variables.
VanBuren, John; Cavanaugh, Joseph; Marshall, Teresa; Warren, John; Levy, Steven M
2017-09-01
The Akaike Information Criterion (AIC) is a well-known tool for variable selection in multivariable modeling as well as a tool to help identify the optimal representation of explanatory variables. However, it has been discussed infrequently in the dental literature. The purpose of this paper is to demonstrate the use of AIC in determining the optimal representation of dietary variables in a longitudinal dental study. The Iowa Fluoride Study enrolled children at birth and dental examinations were conducted at ages 5, 9, 13, and 17. Decayed or filled surfaces (DFS) trend clusters were created based on age 13 DFS counts and age 13-17 DFS increments. Dietary intake data (water, milk, 100 percent juice, and sugar-sweetened beverages) were collected semiannually using a food frequency questionnaire. Multinomial logistic regression models were fit to predict DFS cluster membership (n=344). Multiple approaches could be used to represent the dietary data, including averaging across all collected surveys, averaging over different shorter time periods to capture age-specific trends, or using the individual time points of dietary data. AIC helped identify the optimal representation. Averaging data for all four dietary variables for the whole period from age 9.0 to 17.0 provided a better representation in the multivariable full model (AIC=745.0) compared to other methods assessed in full models (AIC=750.6 for age 9 and 9-13 increment dietary measurements and AIC=762.3 for age 9, 13, and 17 individual measurements). The results illustrate that AIC can help researchers identify the optimal way to summarize information for inclusion in a statistical model. The method presented here can be used by researchers performing statistical modeling in dental research. This method provides an alternative approach for assessing the propriety of variable representation to significance-based procedures, which could potentially lead to improved research in the dental community. 
© 2017 American Association of Public Health Dentistry.
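The criterion itself is simple to compute. Below, the log-likelihoods and parameter counts are hypothetical values chosen so that the resulting AICs match those reported in the abstract; only the AIC formula and the comparison logic are standard:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2*log L; smaller is better."""
    return 2 * k - 2 * log_likelihood

# Hypothetical log-likelihoods and parameter counts chosen to reproduce
# the AIC values quoted in the abstract.
candidates = {
    "age 9-17 averages": aic(-368.50, 4),         # 745.0
    "age 9 + 9-13 increment": aic(-367.30, 8),    # 750.6
    "ages 9, 13, 17 separately": aic(-369.15, 12),  # 762.3
}
best = min(candidates, key=candidates.get)
```

The penalty term 2k is what lets AIC reward a coarser summary (fewer parameters) when the finer-grained representations do not improve the fit enough to pay for their extra parameters.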
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. 
Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than on theoretical physicochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
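The core stochastic step, Monte Carlo generation of event values followed by flow-weighted dilution and ranking, can be sketched as follows; all distribution parameters are invented placeholders, not SELDM's national statistics:

```python
import random

random.seed(5)

def downstream_emc(c_runoff, q_runoff, c_upstream, q_upstream):
    """Flow-weighted mixing of highway runoff with upstream streamflow."""
    return (c_runoff * q_runoff + c_upstream * q_upstream) \
        / (q_runoff + q_upstream)

# Monte Carlo population of storm events (placeholder lognormal parameters).
events = []
for _ in range(2000):
    c_r = random.lognormvariate(3.0, 1.0)  # runoff concentration
    q_r = random.lognormvariate(0.0, 0.5)  # runoff flow
    c_u = random.lognormvariate(1.0, 0.8)  # upstream concentration
    q_u = random.lognormvariate(2.0, 0.7)  # upstream flow
    events.append(downstream_emc(c_r, q_r, c_u, q_u))

events.sort()
exceed_10pct = events[int(0.9 * len(events))]  # 10% exceedance concentration
```

Ranking the simulated population and reading off exceedance percentiles is what turns the stochastic output into the planning-level risk statements the model is designed to support.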
Improving the Representation of Land in Climate Models by Application of EOS Observations
NASA Technical Reports Server (NTRS)
2004-01-01
The PI's current and previous IDS investigations have focused on applying land data to the improvement of climate models. The previous IDS research identified the key factors limiting the accuracy of climate models as the representation of albedos, land cover, the fraction of landscape covered by vegetation, roughness lengths, surface skin temperature, and canopy properties such as leaf area index (LAI) and average stomatal conductance. We therefore assembled a team uniquely positioned to focus on these key variables and to incorporate remotely sensed measures of them into the next generation of climate models.
Davey, Gareth
2006-01-01
A methodological difficulty facing welfare research on nonhuman animals in the zoo is the large number of uncontrolled variables due to variation within and between study sites. Zoo visitors act as uncontrolled variables, with number, density, size, and behavior constantly changing. This is worrisome because previous research linked visitor variables to animal behavioral changes indicative of stress. There are implications for research design: studies that do not account for visitors' effects on animal welfare risk having their findings distorted by confounding (visitor) variables. Zoos need methods to measure and minimize the effects of visitor behavior and to ensure that there are no hidden variables in research models. This article identifies a previously unreported variable--hourly variation (decrease) in visitor interest--that may impinge on animal welfare and validates a methodology for measuring it. That visitor interest wanes across the course of the day has important implications for animal welfare management; visitor effects on animal welfare are likely to occur, or intensify, during the morning or in earlier visits when visitor interest is greatest. This article discusses this issue and possible solutions for reducing visitor effects on animal well-being.
Weigel, B.M.; Robertson, Dale M.
2007-01-01
We sampled 41 sites on 34 nonwadeable rivers that represent the types of rivers in Wisconsin, and the kinds and intensities of nutrient and other anthropogenic stressors upon each river type. Sites covered much of United States Environmental Protection Agency national nutrient ecoregions VII--Mostly Glaciated Dairy Region, and VIII--Nutrient Poor, Largely Glaciated Upper Midwest. Fish, macroinvertebrates, and three categories of environmental variables including nutrients, other water chemistry, and watershed features were collected using standard protocols. We summarized fish assemblages by index of biotic integrity (IBI) and its 10 component measures, and macroinvertebrates by 2 organic pollution tolerance and 12 proportional richness measures. All biotic and environmental variables represented a wide range of conditions, with biotic measures ranging from poor to excellent status, despite nutrient concentrations being consistently higher than reference concentrations reported for the regions. Regression tree analyses of nutrients on a suite of biotic measures identified breakpoints in total phosphorus (~0.06 mg/l) and total nitrogen (~0.64 mg/l) concentrations at which biotic assemblages were consistently impaired. Redundancy analyses (RDA) were used to identify the most important variables within each of the three environmental variable categories, which were then used to determine the relative influence of each variable category on the biota. Nutrient measures, suspended chlorophyll a, water clarity, and watershed land cover type (forest or row-crop agriculture) were the most important variables, and they explained significant amounts of variation within the macroinvertebrate (R² = 60.6%) and fish (R² = 43.6%) assemblages.
The environmental variables selected in the macroinvertebrate model were correlated to such an extent that partial RDA analyses could not attribute explained variation to individual environmental categories, assigning 89% of the explained variation to interactions among the categories. In contrast, partial RDA attributed much of the explained variation to the nutrient (25%) and other water chemistry (38%) categories for the fish model. Our analyses suggest that it would be beneficial to develop criteria based upon a suite of biotic and nutrient variables simultaneously to deem waters as not meeting their designated uses. © 2007 Springer Science+Business Media, LLC.
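For a single split, the regression-tree breakpoint analysis above reduces to finding the threshold on a nutrient variable that minimizes the total squared error around the two resulting group means. A minimal pure-Python sketch (the total phosphorus and IBI values below are made up for illustration; this is not the authors' analysis):

```python
def best_breakpoint(x, y):
    """One-split regression 'tree': return the threshold on x that minimizes
    the summed squared error of the two group means of y."""
    pairs = sorted(zip(x, y))  # order candidate splits by the predictor

    def sse(vals):
        # Squared error of a group around its own mean
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_err, best_thresh = float("inf"), None
    for i in range(1, len(pairs)):
        left = [v for _, v in pairs[:i]]
        right = [v for _, v in pairs[i:]]
        err = sse(left) + sse(right)
        if err < best_err:
            # Place the threshold midway between adjacent predictor values
            best_err = err
            best_thresh = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_thresh

# Hypothetical data: IBI scores drop sharply once total phosphorus exceeds ~0.06 mg/l
tp = [0.02, 0.03, 0.04, 0.05, 0.07, 0.08, 0.09, 0.10]
ibi = [80, 78, 82, 79, 40, 42, 38, 41]
```

A full regression tree applies this search recursively and across predictors, but a single split is what produces the kind of impairment breakpoint the abstract reports.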
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed rather than visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be explained by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach to structuring the relationships between measured variables and the cognitive functions they supposedly represent.
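In the simplest possible path-analysis case, one observed predictor and one observed outcome, the standardized path coefficient equals the Pearson correlation. The sketch below illustrates the selective-effect pattern reported above (age related to processing speed but not VSTM) on simulated data; the generative model and every coefficient are invented for illustration, and a latent-variable SEM is of course far richer than this:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation, i.e. the standardized slope in a
    one-predictor, one-outcome path model."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

random.seed(1)
n = 200
age = [random.uniform(60, 75) for _ in range(n)]
# Hypothetical generative model mirroring the reported selective effect:
# processing speed declines with age, VSTM capacity does not.
speed = [-0.12 * a + random.gauss(0, 1) for a in age]
vstm = [random.gauss(4, 1) for _ in range(n)]
```

With data like these, the age-speed path is reliably negative while the age-VSTM path hovers near zero, which is the qualitative pattern the abstract's Model 2 reports.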
Egli, Simone C; Beck, Irene R; Berres, Manfred; Foldi, Nancy S; Monsch, Andreas U; Sollberger, Marc
2014-10-01
It is unclear whether the predictive strength of established cognitive variables for progression to Alzheimer's disease (AD) dementia from mild cognitive impairment (MCI) varies depending on time to conversion. We investigated which cognitive variables were the best predictors, and which of these variables remained predictive for patients with longer times to conversion. Seventy-five participants with MCI were assessed on measures of learning, memory, language, and executive function. Relative predictive strengths of these measures were analyzed using Cox regression models. Measures of word-list position (namely, serial position scores), together with Short Delay Free Recall of word-list learning, best predicted conversion to AD dementia. However, only serial position scores predicted conversion for those participants with longer times to conversion. Results emphasize that the predictive strength of cognitive variables varies depending on time to conversion to dementia. Moreover, finer measures of learning captured by serial position scores were the most sensitive predictors of AD dementia. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
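Cox regression, as used above, ranks predictors by how well they explain the hazard of conversion via the partial likelihood. As a hedged illustration of that machinery only, here is a single-covariate Cox fit by gradient ascent in pure Python; the toy data are invented, and the study's actual models involved multiple predictors and standard statistical software:

```python
import math

def cox_univariate(times, events, x, lr=0.01, iters=2000):
    """Fit a one-covariate Cox proportional-hazards model by maximizing the
    partial log-likelihood with plain gradient ascent.
    times: follow-up times; events: 1 if converted, 0 if censored;
    x: the covariate (e.g. a toy 'serial position score')."""
    b = 0.0
    n = len(times)
    for _ in range(iters):
        grad = 0.0
        for i in range(n):
            if not events[i]:
                continue  # censored subjects contribute only through risk sets
            # Risk set: everyone still under observation at the event time
            risk = [j for j in range(n) if times[j] >= times[i]]
            w = [math.exp(b * x[j]) for j in risk]
            s = sum(w)
            # Gradient term: observed covariate minus its risk-set weighted mean
            xbar = sum(wj * x[j] for wj, j in zip(w, risk)) / s
            grad += x[i] - xbar
        b += lr * grad
    return b

# Hypothetical cohort: higher covariate values tend to convert earlier,
# so the fitted log-hazard coefficient should come out positive.
toy_times = [1, 2, 3, 4, 5, 6]
toy_events = [1, 1, 1, 1, 1, 1]
toy_x = [2, 1, 2, 0, 1, 0]
```

A positive coefficient means the hazard of conversion rises with the covariate; comparing such coefficients (and their significance) across cognitive measures is what "relative predictive strength" amounts to in this framework.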