Sample records for variable selection procedure

  1. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
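
    A minimal sketch of the general idea in Python, weighting variables by a variance-to-range ratio before k-means; the exact weighting formula and the placeholder data are illustrative assumptions, not necessarily Steinley and Brusco's scheme:

      # Illustrative: weight w_j = var(x_j) / range(x_j)^2, then scale each
      # column by sqrt(w_j) so squared Euclidean distances are weighted by w_j.
      import numpy as np
      from sklearn.cluster import KMeans

      def variance_to_range_weights(X):
          variances = X.var(axis=0)
          ranges = X.max(axis=0) - X.min(axis=0)
          return variances / ranges**2

      X = np.random.rand(200, 5)                       # placeholder data
      w = variance_to_range_weights(X)
      labels = KMeans(n_clusters=3, n_init=10).fit_predict(X * np.sqrt(w))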

  2. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS

    PubMed Central

    Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.

    2012-01-01

In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established; in particular, its variable selection performance under a diverging number of predictors and sample size is investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
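
    A hedged sketch of the forward stepwise skeleton: at each step, add the predictor that maximizes a squared-correlation criterion between the response and a fitted linear combination. Plain R^2 stands in here for the COP statistic, which is based on sliced inverse regression and a transformed response:

      import numpy as np

      def forward_stepwise(X, y, max_vars):
          """Greedy forward selection by R^2 of a least-squares fit."""
          selected, remaining = [], list(range(X.shape[1]))
          for _ in range(max_vars):
              def r2(j):
                  Z = np.column_stack([np.ones(len(y)), X[:, selected + [j]]])
                  beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
                  return 1 - (y - Z @ beta).var() / y.var()
              best = max(remaining, key=r2)            # largest criterion gain
              selected.append(best)
              remaining.remove(best)
          return selected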

  3. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    PubMed Central

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, methods were compared using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. Overall, the two newly proposed procedures were stable across various simulation scenarios, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures, which yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and selected a more stringent set of factors. The individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of significant variables and provide new insight into epidemiological association analysis. PMID:26214802
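
    The bootstrap ranking idea can be sketched as follows (a simplified reading of the procedure: fit a cross-validated LASSO on many bootstrap resamples and rank variables by selection frequency; the resample count and estimator settings are illustrative):

      import numpy as np
      from sklearn.linear_model import LassoCV

      def bootstrap_selection_frequency(X, y, n_boot=100, seed=0):
          """Fraction of bootstrap LASSO fits in which each variable enters."""
          rng = np.random.default_rng(seed)
          n, p = X.shape
          counts = np.zeros(p)
          for _ in range(n_boot):
              idx = rng.integers(0, n, size=n)         # bootstrap resample
              counts += LassoCV(cv=5).fit(X[idx], y[idx]).coef_ != 0
          return counts / n_boot

    Variables can then be ranked by frequency and a final model fitted on the top-ranked set.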

  4. Canonical Measure of Correlation (CMC) and Canonical Measure of Distance (CMD) between sets of data. Part 3. Variable selection in classification.

    PubMed

    Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto

    2010-01-11

In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables, with the aim of producing more parsimonious and ultimately more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former comprising the independent variables and the latter the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted to give a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.
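
    For a single variable against the unfolded (one-hot) class matrix, the canonical correlation reduces to the multiple correlation of that variable regressed on the class indicators, so the one-variable-at-a-time ranking can be sketched as below (an illustration of the idea, not the authors' implementation):

      import numpy as np

      def rank_by_class_correlation(X, y):
          """Rank columns of X by their correlation with class membership."""
          classes = np.unique(y)
          Y = (y[:, None] == classes[None, :]).astype(float)  # unfolded class matrix
          scores = []
          for j in range(X.shape[1]):
              x = X[:, j]
              Z = np.column_stack([np.ones(len(x)), Y[:, :-1]])  # drop one dummy
              beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
              scores.append(np.sqrt(max(0.0, 1 - (x - Z @ beta).var() / x.var())))
          return np.argsort(scores)[::-1]                     # most discriminative first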

  5. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  6. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190

  7. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to be used to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In this context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  8. A survey of variable selection methods in two Chinese epidemiology journals

    PubMed Central

    2010-01-01

Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis (e.g. stepwise or individual significance testing of model coefficients); C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals. PMID:20920252

  9. Variable Selection for Regression Models of Percentile Flows

    NASA Astrophysics Data System (ADS)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    PubMed

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed that iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
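
    The paper's central evaluation point, keeping variable selection inside each validation fold so the accuracy estimate is external to the selection process, can be sketched as below; a simple importance cutoff stands in for the backward elimination used in the study, and all settings are illustrative:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold

      def external_cv_accuracy(X, y, keep=20, n_splits=5):
          """CV accuracy with variable selection redone inside every fold."""
          accs = []
          for tr, te in StratifiedKFold(n_splits=n_splits, shuffle=True).split(X, y):
              rf = RandomForestClassifier(n_estimators=500).fit(X[tr], y[tr])
              top = np.argsort(rf.feature_importances_)[::-1][:keep]
              rf2 = RandomForestClassifier(n_estimators=500).fit(X[tr][:, top], y[tr])
              accs.append(rf2.score(X[te][:, top], y[te]))
          return float(np.mean(accs))

    Selecting variables on the full data first and cross-validating afterwards would reuse the test folds and bias the accuracy estimate upwards.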

  11. Variable selection in subdistribution hazard frailty models with competing risks data

    PubMed Central

    Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo

    2014-01-01

The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872

  12. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    PubMed Central

    Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo

    2007-01-01

Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in 1–14 year old children. After starting with a list of 50 potential variables we ended with three highly plausible predictors, by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis, which has application for other environmentally determined infectious diseases. All this was accomplished using general-purpose statistical software. PMID:17892584
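
    The first two stages, univariate ranking and within-theme correlation pruning, might be sketched as follows (Python with statsmodels; the 0.8 correlation cutoff is an illustrative assumption, and environmental themes are ignored for brevity):

      import numpy as np
      import statsmodels.api as sm

      def univariate_aic_rank(X, y):
          """Order variables by AIC of univariate logistic fits (best first)."""
          aics = [sm.Logit(y, sm.add_constant(X[:, j])).fit(disp=0).aic
                  for j in range(X.shape[1])]
          return np.argsort(aics)

      def prune_correlated(X, order, r_max=0.8):
          """Keep a variable only if not highly correlated with a better-ranked one."""
          kept = []
          for j in order:
              r = [abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) for k in kept]
              if not r or max(r) < r_max:
                  kept.append(j)
          return kept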

  13. A regularized variable selection procedure in additive hazards model with stratified case-cohort design.

    PubMed

    Ni, Ai; Cai, Jianwen

    2018-07-01

Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large. An efficient variable selection method is needed for case-cohort studies where the covariates are only observed in a subset of the sample. Current literature on this topic has focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model either because the proportional hazards assumption is violated or the additive hazards model provides more relevant information to the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.

  14. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, and a procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performances due to calibration data selection is investigated with a random selection method. An approach using a cluster method is applied in order to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve performances of calibrated models. It is found that the information content in the calibration data is important in addition to the size of the calibration data.
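
    The cluster-based calibration data selection can be sketched as below: cluster the candidate events and keep the event nearest each centroid, so the calibration set covers the variability in the data rather than a random slice of it (the cluster count is an illustrative choice):

      import numpy as np
      from sklearn.cluster import KMeans

      def representative_calibration_set(X, n_cal=20):
          """Indices of one nearest-to-centroid event per cluster."""
          km = KMeans(n_clusters=n_cal, n_init=10).fit(X)
          chosen = []
          for c in range(n_cal):
              members = np.flatnonzero(km.labels_ == c)
              d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
              chosen.append(members[np.argmin(d)])
          return np.array(chosen)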

15. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    EPA Pesticide Factsheets

Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction…

  16. Covariate Selection for Multilevel Models with Missing Data

    PubMed Central

    Marino, Miguel; Buxton, Orfeu M.; Li, Yi

    2017-01-01

    Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
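
    The stacking step might look like the sketch below: the multiply-imputed datasets are concatenated and a single penalized fit is run across the stack, so each predictor is kept or dropped consistently across imputations. A plain LASSO stands in for the paper's group-lasso penalty:

      import numpy as np
      from sklearn.linear_model import LassoCV

      def stacked_imputation_lasso(imputed_Xs, y):
          """Select columns from a stack of M imputed copies of the data."""
          X_stack = np.vstack(imputed_Xs)
          y_stack = np.concatenate([y] * len(imputed_Xs))
          model = LassoCV(cv=5).fit(X_stack, y_stack)
          return np.flatnonzero(model.coef_)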

  17. Penalized regression procedures for variable selection in the potential outcomes framework

    PubMed Central

    Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L.

    2015-01-01

    A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple ‘impute, then select’ class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems, and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data and imputation are drawn. A difference LASSO algorithm is defined, along with its multiple imputation analogues. The procedures are illustrated using a well-known right heart catheterization dataset. PMID:25628185

  18. Efficient robust doubly adaptive regularized regression with applications.

    PubMed

    Karunamuni, Rohana J; Kong, Linglong; Tu, Wei

    2018-01-01

    We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.

  19. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.

    PubMed

    Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen

    2011-04-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.

  20. Surface Estimation, Variable Selection, and the Nonparametric Oracle Property

    PubMed Central

    Storlie, Curtis B.; Bondell, Howard D.; Reich, Brian J.; Zhang, Hao Helen

    2010-01-01

    Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting. PMID:21603586

  1. Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…

  2. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
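
    For the LASSO, no variable enters the model once the penalty exceeds lambda_max = max_j |x_j'y| / n (centered, standardized data), so the permutation idea can be sketched as: permute the response many times, record lambda_max under each permutation, and take a quantile of those null values as the penalty. The quantile and permutation count below are illustrative assumptions:

      import numpy as np

      def permutation_lambda(X, y, n_perm=200, q=0.5, seed=0):
          """Penalty chosen from the null distribution of lambda_max."""
          rng = np.random.default_rng(seed)
          Xs = (X - X.mean(axis=0)) / X.std(axis=0)
          lams = []
          for _ in range(n_perm):
              yp = rng.permutation(y)
              lams.append(np.abs(Xs.T @ (yp - yp.mean())).max() / len(y))
          return float(np.quantile(lams, q))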

  3. Linear and nonlinear variable selection in competing risks data.

    PubMed

    Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng

    2018-06-15

The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade. There is no existing work on selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariate(s) simultaneously and to estimate the selected covariate effect(s). We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes.

4. Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA

    PubMed Central

    Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui

    2014-01-01

Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative to avoid restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792

  5. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness, something that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
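
    A minimal sketch of the loss itself: squared error is replaced by 1 - exp(-r^2 / gamma), which caps the influence of large residuals; an L1 term is added here for selection. The optimizer and the values of gamma and lam are illustrative assumptions, not the authors' algorithm:

      import numpy as np
      from scipy.optimize import minimize

      def exp_squared_fit(X, y, gamma=1.0, lam=0.1):
          """Penalized regression under the exponential squared loss."""
          def objective(beta):
              r = y - X @ beta
              return np.sum(1 - np.exp(-r**2 / gamma)) + lam * np.abs(beta).sum()
          return minimize(objective, x0=np.zeros(X.shape[1]), method="Powell").x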

  6. Robust Variable Selection with Exponential Squared Loss

    PubMed Central

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-01-01

Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness, something that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996

  7. Variable selection in discrete survival models including heterogeneity.

    PubMed

    Groll, Andreas; Tutz, Gerhard

    2017-04-01

Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge and lasso-type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.

  8. Procedures for shape optimization of gas turbine disks

    NASA Technical Reports Server (NTRS)

    Cheu, Tsu-Chien

    1989-01-01

Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of the selected points on the disk contours are used as the design variables. Structural weight, stress and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.

  9. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
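
    A sketch in the spirit of such input selection: greedily add the candidate input with the highest estimated mutual information with the residual of the current model. sklearn's nearest-neighbor estimator stands in here for the paper's partial mutual information:

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression
      from sklearn.linear_model import LinearRegression

      def greedy_mi_selection(X, y, n_inputs=3):
          """Forward input selection driven by residual mutual information."""
          selected, resid = [], y.astype(float).copy()
          for _ in range(n_inputs):
              mi = mutual_info_regression(X, resid)
              mi[selected] = -np.inf                   # skip already-chosen inputs
              selected.append(int(np.argmax(mi)))
              model = LinearRegression().fit(X[:, selected], y)
              resid = y - model.predict(X[:, selected])
          return selected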

  10. Data re-arranging techniques leading to proper variable selections in high energy physics

    NASA Astrophysics Data System (ADS)

    Kůs, Václav; Bouř, Petr

    2017-12-01

We introduce a new data-based approach to homogeneity testing and variable selection carried out in high energy physics experiments, where one of the basic tasks is to test the homogeneity of weighted samples, mainly the Monte Carlo simulations (weighted) and real data measurements (unweighted). This technique is called 'data re-arranging' and it enables variable selection to be performed by means of classical statistical homogeneity tests such as Kolmogorov-Smirnov, Anderson-Darling, or Pearson's chi-square divergence test. P-values of our variants of homogeneity tests are investigated and the empirical verification through 46-dimensional high energy particle physics data sets is accomplished under newly proposed (equiprobable) quantile binning. In particular, the procedure of homogeneity testing is applied to re-arranged Monte Carlo samples and real DATA sets measured at the particle accelerator Tevatron in Fermilab at the DØ experiment, originating from top-antitop quark pair production in two decay channels (electron, muon) with 2, 3, or 4+ jets detected. Finally, the variable selections in the electron and muon channels induced by the re-arranging procedure for homogeneity testing are provided for Tevatron top-antitop quark data sets.

  11. Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification

    PubMed Central

    Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.

    2013-01-01

Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761

  12. An examination of predictive variables toward graduation of minority students in science at a selected urban university

    NASA Astrophysics Data System (ADS)

    Hunter, Evelyn M. Irving

    1998-12-01

The purpose of this study was to examine the relationship and predictive power of the variables gender, high school GPA, class rank, SAT scores, ACT scores, and socioeconomic status on the graduation rates of minority college students majoring in the sciences at a selected urban university. Data on these variables were examined as they related to minority students majoring in science. The population consisted of 101 minority college students who had majored in the sciences from 1986 to 1996 at an urban university in the southwestern region of Texas. A non-probability sampling procedure was used in this study, namely an incidental sampling technique. A profile sheet was developed to record the information regarding the variables. The composite scores from SAT and ACT testing were used in the study. The dichotomous variables gender and socioeconomic status were dummy coded for analysis. For the gender variable, zero (0) indicated male and one (1) indicated female. Additionally, zero (0) indicated high SES and one (1) indicated low SES. Two parametric procedures were used to analyze the data in this investigation: multiple correlation and multiple regression. Multiple correlation is a statistical technique that indicates the relationship between one variable and a combination of two or more other variables. The variables socioeconomic status and GPA were found to contribute significantly to the graduation rates of minority students majoring in all sciences and in chemistry (Hypotheses Two and Four). These variables accounted for 7% and 15% of the respective variance in the graduation rates of minority students in the sciences and in chemistry. For Hypotheses One and Three, the predictor variables gender, high school GPA, SAT Total Scores, class rank, and socioeconomic status did not contribute significantly to the graduation rates of minority students in biology and pharmacy.

  13. Metacommunity composition of web-spiders in a fragmented neotropical forest: relative importance of environmental and spatial effects.

    PubMed

    Baldissera, Ronei; Rodrigues, Everton N L; Hartz, Sandra M

    2012-01-01

    The distribution of beta diversity is shaped by factors linked to environmental and spatial control. The relative importance of both processes in structuring spider metacommunities has not yet been investigated in the Atlantic Forest. The variance explained by purely environmental, spatially structured environmental, and purely spatial components was compared for a metacommunity of web spiders. The study was carried out in 16 patches of Atlantic Forest in southern Brazil. Field work was done in one landscape mosaic representing a slight gradient of urbanization. Environmental variables encompassed plot- and patch-level measurements and a climatic matrix, while principal coordinates of neighbor matrices (PCNMs) acted as spatial variables. A forward selection procedure was carried out to select environmental and spatial variables influencing web-spider beta diversity. Variation partitioning was used to estimate the contribution of pure environmental and pure spatial effects and their shared influence on beta-diversity patterns, and to estimate the relative importance of selected environmental variables. Three environmental variables (bush density, land use in the surroundings of patches, and shape of patches) and two spatial variables were selected by forward selection procedures. Variation partitioning revealed that 15% of the variation of beta diversity was explained by a combination of environmental and PCNM variables. Most of this variation (12%) corresponded to pure environmental and spatially environmental structure. The data indicated that (1) spatial legacy was not important in explaining the web-spider beta diversity; (2) environmental predictors explained a significant portion of the variation in web-spider composition; (3) one-third of environmental variation was due to a spatial structure that jointly explains variation in species distributions. We were able to detect important factors related to matrix management influencing the web-spider beta-diversity patterns, which are probably linked to historical deforestation events.

  14. Resampling procedures to identify important SNPs using a consensus approach.

    PubMed

    Pardy, Christopher; Motyer, Allan; Wilson, Susan

    2011-11-29

    Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
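
    The two-stage consensus might be sketched as below: random-forest importances shrink the SNP pool to roughly 100 candidates, after which a cross-validated LASSO picks the final set (pool size and estimator settings are illustrative):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LassoCV

      def rf_then_lasso(X, y, n_keep=100):
          """Prefilter SNP columns with RF importance, then select with LASSO."""
          rf = RandomForestRegressor(n_estimators=500).fit(X, y)
          top = np.argsort(rf.feature_importances_)[::-1][:n_keep]
          lasso = LassoCV(cv=5).fit(X[:, top], y)
          return top[np.flatnonzero(lasso.coef_)]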

  15. A direct-gradient multivariate index of biotic condition

    USGS Publications Warehouse

    Miranda, Leandro E.; Aycock, J.N.; Killgore, K. J.

    2012-01-01

    Multimetric indexes constructed by summing metric scores have been criticized despite many of their merits. A leading criticism is the potential for investigator bias involved in metric selection and scoring. Often there is a large number of competing metrics equally well correlated with environmental stressors, requiring a judgment call by the investigator to select the most suitable metrics to include in the index and how to score them. Data-driven procedures for multimetric index formulation published during the last decade have reduced this limitation, yet apprehension remains. Multivariate approaches that select metrics with statistical algorithms may reduce the level of investigator bias and alleviate a weakness of multimetric indexes. We investigated the suitability of a direct-gradient multivariate procedure to derive an index of biotic condition for fish assemblages in oxbow lakes in the Lower Mississippi Alluvial Valley. Although this multivariate procedure also requires that the investigator identify a set of suitable metrics potentially associated with a set of environmental stressors, it is different from multimetric procedures because it limits investigator judgment in selecting a subset of biotic metrics to include in the index and because it produces metric weights suitable for computation of index scores. The procedure, applied to a sample of 35 competing biotic metrics measured at 50 oxbow lakes distributed over a wide geographical region in the Lower Mississippi Alluvial Valley, selected 11 metrics that adequately indexed the biotic condition of five test lakes. Because the multivariate index includes only metrics that explain the maximum variability in the stressor variables rather than a balanced set of metrics chosen to reflect various fish assemblage attributes, it is fundamentally different from multimetric indexes of biotic integrity with advantages and disadvantages. As such, it provides an alternative to multimetric procedures.

  16. Variables selection methods in near-infrared spectroscopy.

    PubMed

    Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao

    2010-05-14

Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on variable selection methods in NIR spectroscopy. Selection methods include some classical approaches, such as the manual approach (knowledge-based selection) and "Univariate" and "Sequential" selection methods; sophisticated methods such as the successive projections algorithm (SPA) and uninformative variable elimination (UVE); elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANNs) and genetic algorithms (GAs); and interval-based algorithms such as interval partial least squares (iPLS), windows PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights and Bayesian approaches is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given.
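
    As one concrete instance of the interval-based family reviewed here, interval PLS (iPLS) can be sketched as: fit a PLS model on each contiguous wavelength window and keep the window with the lowest cross-validated error (window width and component count are illustrative choices):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      def ipls_best_window(X, y, width=50, n_components=5):
          """Return [start, stop) of the best wavelength window by CV MSE."""
          best, best_mse = None, np.inf
          for start in range(0, X.shape[1] - width + 1, width):
              pls = PLSRegression(n_components=n_components)
              mse = -cross_val_score(pls, X[:, start:start + width], y, cv=5,
                                     scoring="neg_mean_squared_error").mean()
              if mse < best_mse:
                  best, best_mse = start, mse
          return best, best + width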

  17. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space at each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure: first, the variable space shrinks at each step; second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods, such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly: only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available at https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
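
    A conceptual sketch of the weighted binary matrix sampling step follows, assuming a plain linear model and cross-validated mean squared error as the fitness criterion (the original work uses PLS calibration on NIR data). The shrink-and-update loop mirrors the two rules described above.

    ```python
    # Conceptual sketch of weighted binary matrix sampling (WBMS) as used in VISSA.
    # Each variable j carries a sampling weight w[j]; sub-models are drawn as
    # Bernoulli(w) rows of a binary matrix, the best fraction of sub-models is
    # used to update the weights, and the variable space shrinks as weights
    # drift toward 0 or 1.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def vissa_like(X, y, n_submodels=200, keep_frac=0.1, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        w = np.full(p, 0.5)                       # every variable starts at 0.5
        for _ in range(n_iter):
            M = rng.random((n_submodels, p)) < w  # weighted binary sampling matrix
            M[M.sum(axis=1) == 0, 0] = True       # guard against empty sub-models
            fitness = np.array([
                -cross_val_score(LinearRegression(), X[:, row], y,
                                 cv=5, scoring='neg_mean_squared_error').mean()
                for row in M])
            best = M[np.argsort(fitness)[:int(keep_frac * n_submodels)]]
            w = best.mean(axis=0)                 # new weight = inclusion frequency
            if np.all((w < 0.05) | (w > 0.95)):   # weights effectively converged
                break
        return np.where(w > 0.5)[0]
    ```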

  18. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to conduct with existing experimental procedures for two reasons. First, existing procedures require a parametric model to serve as a proxy of the latent data structure or data-generating mechanism at the beginning of an experiment; for the experimental scenarios of concern, however, a sound model is often unavailable before the experiment. Second, those scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle, and existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new procedure is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, performing function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges above, function estimation and variable selection are performed by data-driven modeling methods that generate a predictive model from the data collected during the course of an experiment, removing the requirement of a parametric model at the outset; design optimization selects experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by concepts from active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy of the latent data structure, whereas existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
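
    The design-optimization idea can be illustrated compactly. The sketch below substitutes a Gaussian process for the thesis's Bayesian P-spline and spike-and-slab machinery and picks each next design by predictive uncertainty, which is one simple active-learning criterion; all names and settings are illustrative.

    ```python
    # Sketch of the design-optimization idea only: pick the next design point
    # where the current nonparametric model is least certain. A Gaussian process
    # stands in for the thesis's Bayesian P-spline machinery.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def run_adaptive_experiment(true_fn, candidates, n_trials=15, seed=0):
        rng = np.random.default_rng(seed)
        X = candidates[rng.choice(len(candidates), 2, replace=False)]  # seed designs
        y = np.array([true_fn(x) for x in X])
        for _ in range(n_trials):
            gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
            _, std = gp.predict(candidates, return_std=True)
            x_next = candidates[np.argmax(std)]   # most informative design
            X = np.vstack([X, x_next])
            y = np.append(y, true_fn(x_next))     # run the "trial"
        return X, y

    # Example: learn an unknown 1-D response with few trials.
    grid = np.linspace(0, 10, 200).reshape(-1, 1)
    X, y = run_adaptive_experiment(lambda x: np.sin(x[0]) + 0.1 * x[0], grid)
    ```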

  19. Hasse diagram as a green analytical metrics tool: ranking of methods for benzo[a]pyrene determination in sediments.

    PubMed

    Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek

    2016-05-01

    This study presents an application of the Hasse diagram technique (HDT) as an assessment tool to select the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed: the first with metrological variables and the second with "green" variables as input data. The two HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision-support tool to choose the proper analytical procedure with respect to green analytical chemistry principles and analytical performance merits.
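
    The partial-order core of the HDT is easy to state in code. The sketch below computes the dominance relation and the maximal (non-dominated) procedures from a procedures-by-criteria score matrix, assuming all criteria are oriented so that higher is better; drawing the actual diagram would additionally require removing transitive edges.

    ```python
    # Minimal sketch of the partial-order core of the Hasse diagram technique:
    # procedure a dominates b when a is at least as good on every criterion and
    # strictly better on at least one (all criteria oriented so higher = better).
    import numpy as np

    def hasse_relations(scores):
        """scores: (n_procedures, n_criteria) matrix. Returns dominance pairs
        and the maximal (non-dominated) procedures."""
        n = scores.shape[0]
        dominates = [(a, b) for a in range(n) for b in range(n) if a != b
                     and np.all(scores[a] >= scores[b])
                     and np.any(scores[a] > scores[b])]
        dominated = {b for _, b in dominates}
        maximal = [i for i in range(n) if i not in dominated]
        return dominates, maximal

    # Toy example: 4 procedures scored on 3 "greenness" criteria.
    scores = np.array([[3, 2, 3], [2, 2, 1], [3, 3, 3], [1, 4, 2]])
    print(hasse_relations(scores))  # ([(0, 1), (2, 0), (2, 1)], [2, 3])
    ```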

  20. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection that addresses both explanatory-variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, with the model parameters assigned non-informative prior distributions. The final model is built using the results of the variable selection. The proposed methodology was applied to the number of fatal accidents in Spain during 2000-2011. This indicator underwent the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective; hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  1. Variable Selection through Correlation Sifting

    NASA Astrophysics Data System (ADS)

    Huang, Jim C.; Jojic, Nebojsa

    Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
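
    One plausible reading of the filtering step is sketched below, under the assumption that the shared correlation among decoy variables sits in the top principal components: project those components out of both the predictors and the response, then run an ordinary lasso. The number of removed components is a tuning choice, not something specified in the record.

    ```python
    # Remove the top principal components (which carry the shared correlation
    # among decoy variables) from both predictors and response, then run
    # ordinary l1-regularized regression on the filtered data.
    import numpy as np
    from sklearn.linear_model import Lasso

    def sift_then_lasso(X, y, n_remove=2, alpha=0.1):
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        # Left singular vectors = principal component directions in sample space.
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        Uk = U[:, :n_remove]
        X_f = Xc - Uk @ (Uk.T @ Xc)   # project the top components out of X...
        y_f = yc - Uk @ (Uk.T @ yc)   # ...and out of y alike
        model = Lasso(alpha=alpha).fit(X_f, y_f)
        return np.flatnonzero(model.coef_)
    ```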

  2. Weighting Test Samples in IRT Linking and Equating: Toward an Improved Sampling Design for Complex Equating. Research Report. ETS RR-13-39

    ERIC Educational Resources Information Center

    Qian, Jiahe; Jiang, Yanming; von Davier, Alina A.

    2013-01-01

    Several factors could cause variability in item response theory (IRT) linking and equating procedures, such as the variability across examinee samples and/or test items, seasonality, regional differences, native language diversity, gender, and other demographic variables. Hence, the following question arises: Is it possible to select optimal…

  3. FIRE: an SPSS program for variable selection in multiple linear regression analysis via the relative importance of predictors.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2011-03-01

    We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.

  4. Detection of Nitrogen Content in Rubber Leaves Using Near-Infrared (NIR) Spectroscopy with Correlation-Based Successive Projections Algorithm (SPA).

    PubMed

    Tang, Rongnian; Chen, Xupeng; Li, Chuang

    2018-05-01

    Near-infrared spectroscopy is an efficient, low-cost technology with potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multicollinearity. However, owing to fluctuations in the correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm within regions of consistent correlation. The results show that CB-SPA selects variable subsets with more valuable variables and less multicollinearity. Meanwhile, models established from the CB-SPA subset outperform those from basic SPA subsets in predicting nitrogen content, in terms of both cross-validation and external prediction. Moreover, CB-SPA is more efficient: the time cost of its selection procedure is one-twelfth that of basic SPA.
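
    For reference, basic SPA itself is only a few lines: each iteration deflates every column by its projection onto the last selected column and then picks the column with the largest residual norm. The sketch below is a generic implementation, not the authors' CB-SPA code.

    ```python
    # Compact implementation of the basic successive projections algorithm (SPA).
    import numpy as np

    def spa(X, n_select, start=0):
        R = X.astype(float).copy()          # running residual matrix
        selected = [start]
        for _ in range(n_select - 1):
            r = R[:, selected[-1]]
            # Deflate every column by its projection onto the last chosen column.
            R = R - np.outer(r, r @ R) / (r @ r)
            norms = np.linalg.norm(R, axis=0)
            norms[selected] = -1.0          # never re-select a chosen column
            selected.append(int(np.argmax(norms)))
        return selected
    ```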

  5. Identification of Genes Involved in Breast Cancer Metastasis by Integrating Protein-Protein Interaction Information with Expression Data.

    PubMed

    Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran

    2017-02-01

    The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to gene selection procedures using different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related to breast cancer metastasis, based on the importance scores of two variable selection algorithms: the mean decrease Gini (MDG) criterion of Random Forest and the GeneRank algorithm, which incorporates protein-protein interaction (PPI) information. The new gene selection algorithm is called PPIRF. The improved prediction accuracy fully illustrates the reliability and high interpretability of the gene list selected by the PPIRF approach.
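
    A sketch of the two ingredients as described, with assumptions flagged: scikit-learn's Gini importance stands in for MDG, the GeneRank step is a personalized PageRank-style iteration over the PPI adjacency matrix, and the equal-weight combination at the end is an illustrative choice, since the article's exact blending rule is not given here.

    ```python
    # Sketch of the two PPIRF ingredients: random-forest Gini importance and a
    # GeneRank-style diffusion of scores over a PPI network.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def generank(adj, seed_scores, d=0.5, n_iter=200):
        """PageRank-like iteration personalized by the seed scores."""
        deg = np.maximum(adj.sum(axis=1), 1.0)
        W = adj / deg[:, None]                 # row-normalized adjacency
        r = seed_scores / seed_scores.sum()
        for _ in range(n_iter):
            r = (1 - d) * seed_scores + d * (W.T @ r)
        return r

    def ppirf_like_scores(X, y, adj, alpha=0.5, seed=0):
        rf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X, y)
        mdg = rf.feature_importances_          # mean-decrease-Gini analogue
        gr = generank(adj, mdg + 1e-12)
        norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-12)
        # Assumption: a simple 50/50 blend of the two normalized scores.
        return alpha * norm(mdg) + (1 - alpha) * norm(gr)
    ```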

  6. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  7. Predicting the In-Hospital Responsiveness to Treatment of Alcoholics. Social Factors as Predictors of Outcome. Brain Damage as a Factor in Treatment Outcome of Chronic Alcoholic Patients.

    ERIC Educational Resources Information Center

    Mascia, George V.; And Others

    The authors attempt to locate predictor variables associated with the outcome of alcoholic treatment programs. Mascia's study focuses on the predictive potential of: (1) response to a GSR conditioning procedure; (2) several personality variables; and (3) age and IQ measures. Nine variables, reflecting diverse perspectives, were selected as a basis…

  8. Methods of rapid, early selection of poplar clones for maximum yield potential: a manual of procedures.

    Treesearch

    USDA FS

    1982-01-01

    Instructions, illustrated with examples and experimental results, are given for the controlled-environment propagation and selection of poplar clones. Greenhouse and growth-room culture of poplar stock plants and scions are described, and statistical techniques for discriminating among clones on the basis of growth variables are emphasized.

  9. Inter-rater reliability of select physical examination procedures in patients with neck pain.

    PubMed

    Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A

    2014-07-01

    This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen Kappa and weighted Kappa for ordinal data while continuous level data were calculated using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 Kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9° while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in those patients with NP performed 24-48 h after the initial examination. There was considerably less reliability when compared to previous studies, thus clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
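
    The agreement statistics used here translate directly into code. The sketch below computes a weighted kappa with scikit-learn and derives the SEM and minimal detectable change from an ICC; the data and the ICC value are placeholders, and computing ICC(2,1) itself would need a dedicated routine (e.g., the third-party pingouin package).

    ```python
    # Cohen's (weighted) kappa for categorical findings, and SEM/MDC from an ICC
    # for continuous measures such as cervical range of motion.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rater1 = [0, 1, 2, 1, 0, 2, 1, 1]   # ordinal exam gradings, first visit
    rater2 = [0, 1, 1, 1, 0, 2, 2, 1]   # same patients, 24-48 h later
    print(cohen_kappa_score(rater1, rater2, weights='linear'))

    sd_rom = 8.0          # between-subject SD of range of motion (degrees)
    icc = 0.74            # placeholder reliability coefficient
    sem = sd_rom * np.sqrt(1 - icc)     # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2) * sem     # minimal detectable change (95%)
    print(sem, mdc95)
    ```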

  10. Evaluation of variable selection methods for random forests and omics data sets.

    PubMed

    Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke

    2017-10-16

    Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
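
    For readers who want to try the recommended approach, a usage sketch with the third-party Python port BorutaPy (pip install Boruta) follows; the original Boruta is an R package, and the synthetic data here are only for demonstration.

    ```python
    # Usage sketch of the Boruta algorithm via the third-party BorutaPy package;
    # API details reflect that package, not the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from boruta import BorutaPy

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only the first two features matter

    rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
    selector = BorutaPy(rf, n_estimators='auto', random_state=1)
    selector.fit(X, y)                         # expects plain numpy arrays

    print(np.flatnonzero(selector.support_))       # confirmed features
    print(np.flatnonzero(selector.support_weak_))  # tentative features
    ```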

  11. Cost Accounting as a Tool for Increasing Cost Transparency in Selective Hepatic Transarterial Chemoembolization.

    PubMed

    Ahmed, Osman; Patel, Mikin; Ward, Thomas; Sze, Daniel Y; Telischak, Kristen; Kothary, Nishita; Hofmann, Lawrence V

    2015-12-01

    To increase cost transparency and uncover potential areas for savings in patients receiving selective transarterial chemoembolization at a tertiary care academic center. The hospital cost accounting system charge master sheet for direct and total costs associated with selective transarterial chemoembolization in fiscal years 2013 and 2014 was queried for each of the four highest volume interventional radiologists at a single institution. There were 517 cases (range, 83-150 per physician) performed; direct costs incurred relating to care before, during, and after the procedure with respect to labor, supply, and equipment fees were calculated. A median of 48 activity codes were charged per selective transarterial chemoembolization from five cost centers, represented by the angiography suite, units for care before and after the procedure, pharmacy, and observation floors. The average direct cost of selective transarterial chemoembolization did not significantly differ among operators at $9,126.94, $8,768.77, $9,027.33, and $8,909.75 (P = .31). Intraprocedural costs accounted for 82.8% of total direct costs and provided the greatest degree in cost variability ($7,268.47-$7,691.27). The differences in intraprocedural expense among providers were not statistically significant (P = .09), even when separated into more specific procedure-related labor and supply costs. Cost accounting systems could effectively be interrogated as a method for calculating direct costs associated with selective transarterial chemoembolization. The greatest source of expenditure and variability in cost among providers was shown to be intraprocedural labor and supplies, although the effect did not appear to be operator dependent. Copyright © 2015 SIR. Published by Elsevier Inc. All rights reserved.

  12. A Review of Validation Research on Psychological Variables Used in Hiring Police Officers.

    ERIC Educational Resources Information Center

    Malouff, John M.; Schutte Nicola S.

    This paper reviews the methods and findings of published research on the validity of police selection procedures. As a preface to the review, the typical police officer selection process is briefly described. Several common methodological deficiencies of the validation research are identified and discussed in detail: (1) use of past-selection…

  13. Improving observational study estimates of treatment effects using joint modeling of selection effects and outcomes: the case of AAA repair.

    PubMed

    O'Malley, A James; Cotterill, Philip; Schermerhorn, Marc L; Landon, Bruce E

    2011-12-01

    When 2 treatment approaches are available, there are likely to be unmeasured confounders that influence choice of procedure, which complicates estimation of the causal effect of treatment on outcomes using observational data. To estimate the effect of endovascular (endo) versus open surgical (open) repair, including possible modification by institutional volume, on survival after treatment for abdominal aortic aneurysm, accounting for observed and unobserved confounding variables. Observational study of data from the Medicare program using a joint model of treatment selection and survival given treatment to estimate the effects of type of surgery and institutional volume on survival. We studied 61,414 eligible repairs of intact abdominal aortic aneurysms during 2001 to 2004. The outcome, perioperative death, is defined as in-hospital death or death within 30 days of operation. The key predictors are use of endo, transformed endo and open volume, and endo-volume interactions. There is strong evidence of nonrandom selection of treatment with potential confounding variables including institutional volume and procedure date, variables not typically adjusted for in clinical trials. The best fitting model included heterogeneous transformations of endo volume for endo cases and open volume for open cases as predictors. Consistent with our hypothesis, accounting for unmeasured selection reduced the mortality benefit of endo. The effect of endo versus open surgery varies nonlinearly with endo and open volume. Accounting for institutional experience and unmeasured selection enables better decision-making by physicians making treatment referrals, investigators evaluating treatments, and policy makers.

  14. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable; the process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables considered at each split. Finally, I introduce R functions to perform model-based recursive partitioning, a method that incorporates recursive partitioning into conventional parametric model building.
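
    The entry's examples are in R (conditional inference trees and model-based recursive partitioning). As a rough Python analogue of the same workflow, the sketch below grows a CART tree and then a random forest with scikit-learn; note that CART's exhaustive-search splitting is not the conditional inference procedure described above.

    ```python
    # Rough Python analogue of the tree-then-forest workflow (CART, not ctree).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    print(export_text(tree))                    # inspect the recursive splits
    print("single tree:", tree.score(X_te, y_te))

    # A forest stabilizes the single tree via resampling and feature subsetting.
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("forest:", forest.score(X_te, y_te))
    ```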

  15. Lipreading in the prelingually deaf: what makes a skilled speechreader?

    PubMed

    Rodríguez Ortiz, Isabel de los Reyes

    2008-11-01

    Lipreading proficiency was investigated in a group of hearing-impaired people, all of whom knew Spanish Sign Language (SSL). The aim of this study was to establish the relationships between lipreading and several other variables (gender, intelligence, audiological variables, participants' education, parents' education, communication practices, intelligibility, use of SSL). The 32 participants were between 14 and 47 years of age. All had sensorineural hearing losses (from severe to profound). The lipreading procedures comprised identification of words in isolation; the words selected for presentation were spoken by the same talker. Identification of words required participants to select their responses from a set of four appropriately labelled pictures. Lipreading was significantly correlated with intelligence and intelligibility. Multiple regression analyses were used to obtain a prediction equation for the lipreading measures. As a result of this procedure, it is concluded that proficient deaf lipreaders are more intelligent and that their oral speech is more comprehensible to others.

  16. Estimating Selected Streamflow Statistics Representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2008-01-01

    Regional equations and procedures were developed for estimating 1-, 3-, 7-, 14-, and 30-day 2-year; 1-, 3-, 7-, 14-, and 30-day 5-year; and 1-, 3-, 7-, 14-, and 30-day 10-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the 1-day, 3-year and 4-day, 3-year biologically based low-flow frequency values; the U.S. Environmental Protection Agency harmonic-mean flows; and the 10-, 25-, 50-, 75-, and 90-percent flow-duration values. The regional equations were developed by ordinary least-squares regression, with statistics from 117 U.S. Geological Survey continuous streamflow-gaging stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia - North, South-Central, and Eastern Panhandle - were determined. Drainage area, precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. Estimating procedures are presented for determining statistics at a gaging station, a partial-record station, and an ungaged location, and examples of some estimating procedures are given.

  17. Relationships among Selected Factors and the Sight-Reading Ability of High School Mixed Choirs.

    ERIC Educational Resources Information Center

    Daniels, Rose Dwiggins

    1986-01-01

    This study investigated the relationships among sight-reading ability and selected variables for 20 high school choirs. Results showed that the best predictors of sight-reading ability are ethnic makeup of the school, presence of a piano in the home, a rural school location, and occasional use of rote procedures to teach music. (JDH)

  18. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
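
    A simplified sketch of the sequential loop follows, with a pairwise predictive-disagreement score standing in for the exact Kullback-Leibler criterion and Gaussian likelihood updates for the model probabilities; the two candidate models and the noise level are toy choices.

    ```python
    # Sequential design for discriminating rival linear models: choose the
    # design where the models' predictions disagree most (weighted by current
    # model probabilities), observe, then update the probabilities.
    import numpy as np

    def choose_design(candidates, models, probs):
        preds = np.array([[m(x) for x in candidates] for m in models])
        # Expected pairwise squared disagreement at each candidate design.
        score = sum(probs[i] * probs[j] * (preds[i] - preds[j]) ** 2
                    for i in range(len(models)) for j in range(i + 1, len(models)))
        return int(np.argmax(score))

    def update_probs(probs, models, x, y_obs, sigma=0.5):
        lik = np.array([np.exp(-(y_obs - m(x)) ** 2 / (2 * sigma ** 2))
                        for m in models])
        post = probs * lik
        return post / post.sum()

    # Two rival linear models of one controlled variable; the first is the truth.
    models = [lambda x: 1.0 + 2.0 * x, lambda x: 2.5 + 1.5 * x]
    probs = np.array([0.5, 0.5])
    grid = np.linspace(0, 5, 51)
    rng = np.random.default_rng(1)
    for _ in range(5):
        x = grid[choose_design(grid, models, probs)]
        y = models[0](x) + rng.normal(scale=0.5)   # run the experiment
        probs = update_probs(probs, models, x, y)
    print(probs)   # mass should concentrate on the first model
    ```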

  19. Variable selection under multiple imputation using the bootstrap in a prognostic study

    PubMed Central

    Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW

    2007-01-01

    Background: Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and thereby allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method: In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0% to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results: We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined, over the range of inclusion levels from 0% (full model) to 90%, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion: We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
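
    A compact sketch of the inclusion-frequency idea, with stated substitutions: scikit-learn's IterativeImputer provides the imputation, a cross-validated lasso stands in for the selection step used in the study, and the outcome is assumed fully observed.

    ```python
    # On each bootstrap resample: impute missing values, run a variable
    # selector, and tally how often each variable is included.
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LassoCV

    def inclusion_frequencies(X, y, n_boot=100, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        counts = np.zeros(p)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)             # bootstrap resample
            imp = IterativeImputer(random_state=b)  # one imputation per sample
            Xb = imp.fit_transform(X[idx])
            model = LassoCV(cv=5).fit(Xb, y[idx])
            counts += model.coef_ != 0
        return counts / n_boot                      # inclusion frequency per variable
    ```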

  20. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    PubMed

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

    Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
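
    A skeleton of the site-level model: each position contributes one row of local explanatory variables and a categorical mutation outcome, fitted by multinomial logistic regression. The feature names and simulated effect sizes below are placeholders, not the paper's encoding.

    ```python
    # Site-level multinomial model of mutation type given local genomic context.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_sites = 10_000
    X = np.column_stack([
        rng.normal(size=n_sites),      # phyloP-like conservation score
        rng.uniform(size=n_sites),     # replication timing
        rng.exponential(size=n_sites), # expression level
    ])
    # Outcome: 0 = no mutation, 1..3 = three mutation classes; rates depend on X.
    logits = np.column_stack([np.zeros(n_sites),
                              -4 + 0.8 * X[:, 0],
                              -5 + 2.0 * X[:, 1],
                              -5 - 0.5 * X[:, 2]])
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    y = np.array([rng.choice(4, p=pi) for pi in p])

    model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial for multiclass
    print(model.predict_proba(X[:5]))  # per-site probabilities of each mutation type
    ```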

  1. A Geographically Variable Water Quality Index Used in Oregon.

    ERIC Educational Resources Information Center

    Dunnette, D. A.

    1979-01-01

    Discusses the procedure developed in Oregon to formulate a valid water quality index which accounts for the specific conditions in the water body of interest. Parameters selected include oxygen depletion, BOD, eutrophication, dissolved substances, health hazards, and physical characteristics. (CS)

  2. Towards automation of palynology 1: analysis of pollen shape and ornamentation using simple geometric measures, derived from scanning electron microscope images

    NASA Astrophysics Data System (ADS)

    Treloar, W. J.; Taylor, G. E.; Flenley, J. R.

    2004-12-01

    This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could bring numerous advantages for the reconstruction of past environments, making larger data sets practical and offering objectivity and fine-resolution sampling; there are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen, and a variable selection procedure was used to choose the optimal subset for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12-taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code, and from these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities for classification, and the efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used; the best set of variables had an overall classification rate averaging about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen.
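
    The evaluation step translates directly into code. The sketch below runs a leave-one-out classification over a table of per-grain variables; the classifier choice and the random stand-in data are illustrative.

    ```python
    # Leave-one-out classification over geometric/chain-code variables
    # (rows = pollen grains, columns = measures).
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def loo_classification_rate(features, taxa):
        clf = KNeighborsClassifier(n_neighbors=3)  # any classifier slots in here
        scores = cross_val_score(clf, features, taxa, cv=LeaveOneOut())
        return scores.mean()                       # proportion correctly classified

    # Toy call with random stand-in data: 60 grains, 8 geometric measures.
    rng = np.random.default_rng(0)
    print(loo_classification_rate(rng.normal(size=(60, 8)), rng.integers(0, 4, 60)))
    ```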

  3. Continuously Variable Rating: a new, simple and logical procedure to evaluate original scientific publications

    PubMed Central

    Silva, Mauricio Rocha e

    2011-01-01

    OBJECTIVE: Impact Factors (IF) are widely used surrogates for evaluating single articles, in spite of known shortcomings imposed by the skewness of citation distributions. We quantify this asymmetry and propose a simple computer-based procedure for evaluating individual articles. METHOD: (a) Analysis of symmetry. Journals clustered around nine Impact Factor points were selected from the medical "Subject Categories" in Journal Citation Reports 2010. Citable items published in 2008 were retrieved and ranked by citations received over the Jan/2008-Jun/2011 period. Frequency distribution of cites, normalized cumulative cites and absolute cites/decile were determined for each journal cluster. (b) Positive Predictive Value. Three arbitrarily established evaluation classes were generated: LOW (1.3≤IF<2.6); MID (2.6≤IF<3.9); HIGH (IF≥3.9). The Positive Predictive Value for journal clusters within each class range was estimated. (c) Continuously Variable Rating. An alternative evaluation procedure is proposed to allow the rating of individually published articles in comparison to all articles published in the same journal within the same year of publication. The general guiding lines for the construction of a totally dedicated software program are delineated. RESULTS AND CONCLUSIONS: Skewness followed the Pareto Distribution for (1

  4. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedotransfer functions (PTFs) to predict soil characteristics that are not readily available is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure for selecting the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs predicting soil cumulative infiltration on a point basis at specific times (0.5-45 min) from soil readily available characteristics (RACs). To this end, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (E values of 0.673-0.963) and reliable (CV values lower than 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) compared to GMDH and MLR. The results also revealed that excluding Ks from the input-variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, Ks exclusion may yield more practical PTFs, especially for the GMDH network, because the remaining input variables are less time consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimal set of input variables (2-4) and can be a promising modeling strategy that combines the advantages of the ANN and MLR methodologies.
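
    The core of GMDH is easy to sketch: every pair of inputs is fitted with a quadratic polynomial "partial description" on training data, each candidate is scored on held-out data (the external criterion), and the best few survive to form the next layer. The minimal one-layer pass below is generic, not the PTF models of this study.

    ```python
    # Minimal one-layer GMDH pass over all input pairs.
    import numpy as np

    def gmdh_layer(X_tr, y_tr, X_va, y_va, n_keep=5):
        p = X_tr.shape[1]
        candidates = []
        for i in range(p):
            for j in range(i + 1, p):
                def design(X):
                    xi, xj = X[:, i], X[:, j]
                    return np.column_stack([np.ones(len(X)), xi, xj,
                                            xi * xj, xi ** 2, xj ** 2])
                coef, *_ = np.linalg.lstsq(design(X_tr), y_tr, rcond=None)
                err = np.mean((design(X_va) @ coef - y_va) ** 2)  # external criterion
                candidates.append((err, i, j, coef))
        candidates.sort(key=lambda c: c[0])
        return candidates[:n_keep]   # stack these as inputs to the next layer
    ```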

  5. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687

  6. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite-memory stochastic processes with finite alphabet when the data are contaminated. We consider m independent samples, more than half of which are realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite-order Markov models, we focus on the family of variable-length Markov chain models, which includes the fixed-order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure and derive the ABDP for our procedure: if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where one sample is formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q, and we conducted a simulation study. In the application section we address the statistical classification of languages according to their rhythmic features using speech samples, an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure has been to choose, by listening, a subset of the original sample that seems to best represent each language. In our application we use the full dataset without any preselection of samples, applying our robust methodology to estimate a model representing the main law for each language. Our findings agree with the linguistic conjecture concerning the rhythm of the languages included in our dataset.
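
    A simplified version of the selection idea for order-1 chains is sketched below: estimate a smoothed transition matrix per sample, compute pairwise symmetrized relative entropies, and keep the samples whose median distance to the rest is small. The order-1 restriction, the smoothing, and the 60th-percentile cutoff are all illustrative simplifications of the variable-length-model machinery.

    ```python
    # Relative-entropy-based selection of the majority cluster of samples,
    # simplified to order-1 Markov chains over an alphabet of size k.
    import numpy as np

    def transition_matrix(seq, k):
        T = np.ones((k, k))                    # add-one smoothing
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
        return T / T.sum(axis=1, keepdims=True)

    def sym_kl(P, Q):
        return 0.5 * (np.sum(P * np.log(P / Q)) + np.sum(Q * np.log(Q / P)))

    def select_majority(samples, k):
        Ts = [transition_matrix(s, k) for s in samples]
        D = np.array([[sym_kl(Ti, Tj) for Tj in Ts] for Ti in Ts])
        medians = np.median(D, axis=1)
        # Samples with small median distance belong to the majority law Q.
        return np.flatnonzero(medians <= np.percentile(medians, 60))
    ```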

  7. Quantifying interindividual variability and asymmetry of face-selective regions: a probabilistic functional atlas.

    PubMed

    Zhen, Zonglei; Yang, Zetian; Huang, Lijie; Kong, Xiang-Zhen; Wang, Xu; Dang, Xiaobin; Huang, Yangyue; Song, Yiying; Liu, Jia

    2015-06-01

    Face-selective regions (FSRs) are among the most widely studied functional regions in the human brain. However, individual variability of the FSRs has not been well quantified. Here we use functional magnetic resonance imaging (fMRI) to localize the FSRs and quantify their spatial and functional variabilities in 202 healthy adults. The occipital face area (OFA), posterior and anterior fusiform face areas (pFFA and aFFA), posterior continuation of the superior temporal sulcus (pcSTS), and posterior and anterior STS (pSTS and aSTS) were delineated for each individual with a semi-automated procedure. A probabilistic atlas was constructed to characterize their interindividual variability, revealing that the FSRs were highly variable in location and extent across subjects. The variability of FSRs was further quantified on both functional (i.e., face selectivity) and spatial (i.e., volume, location of peak activation, and anatomical location) features. Considerable interindividual variability and rightward asymmetry were found in all FSRs on these features. Taken together, our work presents the first effort to characterize comprehensively the variability of FSRs in a large sample of healthy subjects, and invites future work on the origin of the variability and its relation to individual differences in behavioral performance. Moreover, the probabilistic functional atlas will provide an adequate spatial reference for mapping the face network. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. BSL-3 laboratory practices in the United States: comparison of select agent and non-select agent facilities.

    PubMed

    Richards, Stephanie L; Pompei, Victoria C; Anderson, Alice

    2014-01-01

    New construction of biosafety level 3 (BSL-3) laboratories in the United States has increased in the past decade to facilitate research on potential bioterrorism agents. The Centers for Disease Control and Prevention inspect BSL-3 facilities and review commissioning documentation, but no single agency has oversight over all BSL-3 facilities. This article explores the extent to which standard operating procedures in US BSL-3 facilities vary between laboratories with select agent or non-select agent status. Comparisons are made for the following variables: personnel training, decontamination, personal protective equipment (PPE), medical surveillance, security access, laboratory structure and maintenance, funding, and pest management. Facilities working with select agents had more complex training programs and decontamination procedures than non-select agent facilities. Personnel working in select agent laboratories were likely to use powered air purifying respirators, while non-select agent laboratories primarily used N95 respirators. More rigorous medical surveillance was carried out in select agent workers (although not required by the select agent program) and a higher level of restrictive access to laboratories was found. Most select agent and non-select agent laboratories reported adequate structural integrity in facilities; however, differences were observed in personnel perception of funding for repairs. Pest management was carried out by select agent personnel more frequently than non-select agent personnel. Our findings support the need to promote high quality biosafety training and standard operating procedures in both select agent and non-select agent laboratories to improve occupational health and safety.

  10. Application of different spectrophotometric methods for simultaneous determination of elbasvir and grazoprevir in pharmaceutical preparation

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Abdelazim, Ahmed H.

    2018-01-01

    Three UV spectrophotometric methods have been developed for the first time for the simultaneous determination of two newly FDA-approved drugs, elbasvir and grazoprevir, in their combined pharmaceutical dosage form. These methods comprise the simultaneous equation method and partial least squares with and without a variable selection procedure (genetic algorithm). For the simultaneous equation method, the absorbance values at 369 nm (λmax of elbasvir) and 253 nm (λmax of grazoprevir) were selected to form the two simultaneous equations required for the mathematical processing and quantitative analysis of the studied drugs. Alternatively, partial least squares with and without the variable selection procedure (genetic algorithm) was applied in the spectral analysis, because the simultaneous inclusion of many wavelengths, rather than a single or dual wavelength, greatly increases the precision and predictive ability of the methods. The drugs were successfully assayed in their pharmaceutical formulation by the proposed methods, and the results were statistically compared with those of the manufacturers' methods. Notably, there was no significant difference between the proposed methods and the manufacturers' methods with respect to the validation parameters.
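
    To make the genetic-algorithm option concrete, the sketch below evolves binary wavelength masks with cross-validated PLS performance as the fitness; population size, mutation rate, and the truncation-selection scheme are illustrative defaults rather than the authors' settings.

    ```python
    # Genetic-algorithm wavelength selection for a PLS calibration model.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    def ga_select(X, y, pop=30, gens=25, n_components=3, seed=0):
        rng = np.random.default_rng(seed)
        p = X.shape[1]
        masks = rng.random((pop, p)) < 0.3            # initial population

        def fitness(mask):
            if mask.sum() < n_components:             # PLS needs enough variables
                return -np.inf
            pls = PLSRegression(n_components=n_components)
            return cross_val_score(pls, X[:, mask], y, cv=5, scoring='r2').mean()

        for _ in range(gens):
            scores = np.array([fitness(m) for m in masks])
            order = np.argsort(scores)[::-1]
            parents = masks[order[:pop // 2]]         # truncation selection
            children = []
            for _ in range(pop - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                child = np.where(rng.random(p) < 0.5, a, b)  # uniform crossover
                child ^= rng.random(p) < 0.01                # bit-flip mutation
                children.append(child)
            masks = np.vstack([parents, children])
        best = masks[np.argmax([fitness(m) for m in masks])]
        return np.flatnonzero(best)
    ```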

  11. Variation in levels of some leaf enzymes.

    PubMed

    Downton, J; Slatyer, R O

    1971-03-01

    Several procedures were compared for efficiency in the extraction of certain leaf enzymes (phosphoenolpyruvate carboxylase, ribulose 1,5-diphosphate carboxylase and malate dehydrogenase) in Atriplex hastata (a "C3" species exhibiting conventional photosynthetic metabolism), and in A. spongiosa (a "C4" species in which the initial photosynthetic products are C4 dicarboxylic acids). Glycolate oxidase was also assayed in some cases, and Atriplex nummularia and Sorghum bicolor were also used as test material. A simple procedure, involving a mortar and pestle grind with carborundum added to the grinding mixture, was found to be as effective as glass bead grind procedures. In addition, it was more rapid and showed less variability with different operations. Using the carborundum grind procedure, sources of variability in enzyme activity in apparently uniform leaves were compared, as were effects of time of day, leaf age and storage procedure. In general, if apparently uniform leaves could be selected, variability in levels of enzyme activity appeared to be relatively small, not exceeding about 12%. Time of day also appeared to be relatively unimportant for the enzymes examined. However, the ontogenetic status of the plant was found to be an important source of variability. Leaf age was also a major source of variability where the activity was expressed on a fresh weight basis, but specific activity (i.e. activity expressed on a protein basis) was relatively constant, at least within the range of species and leaf ages examined here. Storage of fresh samples in liquid nitrogen for 24 h, prior to extraction and assay, led to only a small reduction in activity, but substantial changes occurred if storage was in dry ice or in ice, and also where extracts were stored in a deep freeze.

  12. Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.

    PubMed

    Saucier, Gerard

    2009-10-01

    Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between wideband 6 and narrowband 6 factors indicate they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.

  13. Selected meteorological data for an arid climate over bare soil near Beatty, Nye County, Nevada, November 1977 through May 1980

    USGS Publications Warehouse

    Brown, Robin G.; Nichols, William D.

    1990-01-01

    Meteorological data were collected over bare soil at a site for low-level radioactive-waste burial near Beatty, Nevada, from November 1977 to May 1980. The data include precipitation, windspeed, wind direction, incident solar radiation, reflected solar radiation, net radiation, dry- and wet-bulb air temperatures at three heights, soil temperature at five depths, and soil-heat flux at three depths. Mean relative humidity was computed for each day of the collection period for which data are available. A discussion is presented of the study site and the instrumentation and procedures used for collecting and processing the data. Selected data from November 1977 to May 1980 are presented in tabular form. Diurnal fluctuations of selected meteorological variables for representative summer and winter periods are graphically presented. The effects of a partial solar eclipse on selected variables are also discussed.

  14. Additional Samples: Where They Should Be Located

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilger, G. G., E-mail: jfelipe@ufrgs.br; Costa, J. F. C. L.; Koppe, J. C.

    2001-09-15

    Information for mine planning needs to be closely spaced compared with the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located in the same pattern as the original diamond-drillhole grid, only at closer spacing. This procedure is not the best, in a mathematical sense, for selecting a location: the impact of additional information in reducing uncertainty about the parameter being modeled is not the same everywhere within the deposit, and some locations are more sensitive in reducing local and global uncertainty than others. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation; these models share essentially the same histogram and the same variogram obtained from the original data set. At each block belonging to the model, a value is obtained from each of the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index, which was used to map zones of high variability. A value extracted from a given simulation is then added to the original data set in a zone identified as erratic in the previous maps. The process of adding samples and simulating is repeated, and the benefit of the additional sample is evaluated in terms of local and global uncertainty reduction. The procedure proved to be robust and theoretically sound, mapping the zones where additional information is most beneficial. A case study in a coal mine using coal-seam thickness illustrates the method.
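
    The simulate-map-resample loop described above can be sketched briefly. The ensemble below is random stand-in data rather than the output of a real geostatistical simulator, and the coefficient of variation is used as a stand-in uncertainty index, since the abstract does not give the index's exact form.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for n equally probable geostatistical realizations of an
    # attribute (e.g., coal seam thickness) on a 50 x 50 block model.
    # In practice these would come from conditional geostatistical simulation.
    n_sim, nx, ny = 100, 50, 50
    realizations = rng.gamma(shape=5.0, scale=0.4, size=(n_sim, nx, ny))

    # Local uncertainty index per block, combining the n simulated values.
    # The coefficient of variation is an illustrative stand-in for the
    # paper's proposed index.
    mean = realizations.mean(axis=0)
    std = realizations.std(axis=0)
    uncertainty = std / mean  # high values flag "erratic" zones

    # Candidate location for the next sample: the most uncertain block.
    ix, iy = np.unravel_index(np.argmax(uncertainty), uncertainty.shape)
    print(f"next sample at block ({ix}, {iy}), index = {uncertainty[ix, iy]:.3f}")

    # The add-sample/re-simulate loop would then append a value drawn at
    # (ix, iy) to the conditioning data, repeat the simulation, and measure
    # the local and global reduction in the uncertainty index.
    ```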

  15. Resurgence of target responding does not exceed increases in inactive responding in a forced-choice alternative reinforcement procedure in humans

    PubMed Central

    Sweeney, Mary M.; Shahan, Timothy A.

    2016-01-01

    Resurgence following removal of alternative reinforcement has been studied in non-human animals, children with developmental disabilities, and typically functioning adults. Adult human laboratory studies have included responses without a controlled history of reinforcement, included only two response options, or involved extensive training. Arbitrary responses allow for control over history of reinforcement. Including an inactive response never associated with reinforcement allows the conclusion that resurgence exceeds extinction-induced variability. Although procedures with extensive training produce reliable resurgence, a brief procedure with the same experimental control would allow more efficient examination of resurgence in adult humans. We tested the acceptability of a brief, single-session, three-alternative forced-choice procedure as a model of resurgence in undergraduates. Selecting a shape was the target response (reinforced in Phase I), selecting another shape was the alternative response (reinforced in Phase II), and selecting a third shape was never reinforced. Despite manipulating number of trials and probability of reinforcement, resurgence of the target response did not consistently exceed increases in the inactive response. Our findings reiterate the importance of an inactive control response and call for reexamination of resurgence studies using only two response options. We discuss potential approaches to generate an acceptable, brief human laboratory resurgence procedure. PMID:26724752

  16. Development of toughened epoxy polymers for high performance composite and ablative applications

    NASA Technical Reports Server (NTRS)

    Allen, V. R.

    1982-01-01

    A survey of current procedures for the assessment of state of cure in epoxy polymers and for the evaluation of polymer toughness as related to nature of the crosslinking agent was made to facilitate a cause-effect study of the chemical modification of epoxy polymers. Various conformations of sample morphology were examined to identify testing variables and to establish optimum conditions for the selected physical test methods. Dynamic viscoelasticity testing was examined in conjunction with chemical analyses to allow observation of the extent of the curing reaction with size of the crosslinking agent the primary variable. Specifically the aims of the project were twofold: (1) to consider the experimental variables associated with development of "extent of cure" analysis, and (2) to assess methodology of fracture energy determination and to prescribe a meaningful and reproducible procedure. The following is separated into two categories for ease of presentation.

  17. Modelling of binary logistic regression for obesity among secondary students in a rural area of Kedah

    NASA Astrophysics Data System (ADS)

    Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship of the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that assumptions are valid, including independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers; and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight and obese are higher for a student having a family history of obesity and for a non-Malay student who frequently takes routine meals, as compared to a Malay student.
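
    For reference, the logit model described in this abstract takes the standard textbook form below (a general statement, not copied from the paper); the odds ratio for a one-unit change in a predictor, holding the others fixed, is exp(beta_j).

    ```latex
    % Binary logit model: log odds of the outcome as a linear
    % combination of predictors x_1, ..., x_p.
    \[
      \operatorname{logit}(p) = \ln\frac{p}{1-p}
        = \beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p,
      \qquad
      p = \frac{e^{\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p}}
               {1 + e^{\beta_0 + \beta_1 x_1 + \cdots + \beta_p x_p}} .
    \]
    ```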

  18. A method for the dynamic management of genetic variability in dairy cattle

    PubMed Central

    Colleau, Jean-Jacques; Moureaux, Sophie; Briend, Michèle; Bechu, Jérôme

    2004-01-01

    According to the general approach developed in this paper, dynamic management of genetic variability in selected populations of dairy cattle is carried out for three simultaneous purposes: procreation of young bulls to be further progeny-tested, use of service bulls already selected, and approval of recently progeny-tested bulls for use. At each step, the objective is to minimize the average pairwise relationship coefficient across the future population born from programmed matings and the existing population. As a common constraint, the average estimated breeding value of the new population, for a selection goal including many important traits, is set to a desired value. For the procreation of young bulls, breeding costs are additionally constrained. Optimization is fully analytical and directly considers matings. The corresponding algorithms are presented in detail. The efficiency of these procedures was tested on the current Norman population. Comparisons between optimized and real matings clearly showed that optimization would have saved substantial genetic variability without reducing short-term genetic gains. PMID:15231230

  19. Membrane Filter Technique for Enumeration of Enterococci in Marine Waters

    PubMed Central

    Levin, M. A.; Fischer, J. R.; Cabelli, V. J.

    1975-01-01

    A membrane filter procedure is described for the enumeration of enterococci in marine waters. The procedure utilizes a highly selective and somewhat differential primary isolation medium followed by an in situ substrate test for identifying colonies of those organisms capable of hydrolyzing esculin. The procedure (mE) was evaluated with known streptococci strains and field samples with regard to its accuracy, sensitivity, selectivity, specificity, precision, and comparability to existing methods. Essentially quantitative recovery was obtained with seawater-stressed cells of Streptococcus faecalis and S. faecium. Neither S. bovis, S. equinus, S. mitis, nor S. salivarius grew on the medium. The selectivity of the medium was such that a 10,000-fold reduction in background organisms was obtained relative to a medium which contained no inhibitors and was incubated at 35°C. About 90% of those typical colonies designated as enterococci were confirmed as such, and about 12% of the colonies not so designated were, in fact, identified as enterococci. Plate-to-plate variability across samples approximated that expected by chance alone. Verified recoveries of enterococci from natural samples by the mE procedure, on average, exceeded those by the KF method by one order of magnitude. PMID:807165

  20. The Relationship of Teacher Evaluation Scores Generated by a Process-Product Evaluation Instrument to Selected Variables.

    ERIC Educational Resources Information Center

    Tadlock, James; Nesbit, Lamar

    The Jackson Municipal Separate School District, Mississippi, has instituted a mixed-criteria reduction-in-force procedure emphasizing classroom performance to a greater degree than seniority, certification, and staff development participation. The district evaluation process--measuring classroom teaching performance--generated data for the present…

  1. Utility of Extinction-Induced Response Variability for the Selection of Mands

    ERIC Educational Resources Information Center

    Grow, Laura L.; Kelley, Michael E.; Roane, Henry S.; Shillingsburg, M. Alice

    2008-01-01

    Functional communication training (FCT; Carr & Durand, 1985) is a commonly used differential reinforcement procedure for replacing problem behavior with socially acceptable alternative responses. Most studies in the FCT literature consist of demonstrations of the maintenance of responding when various treatment components (e.g., extinction,…

  2. Evaluation of a Pre-Treatment Assessment to Select Mand Topographies for Functional Communication Training

    ERIC Educational Resources Information Center

    Ringdahl, Joel E.; Falcomata, Terry S.; Christensen, Tory J.; Bass-Ringdahl, Sandie M.; Lentz, Alison; Dutt, Anuradha; Schuh-Claus, Jessica

    2009-01-01

    Recent research has suggested that variables related to specific mand topographies targeted during functional communication training (FCT) can affect treatment outcomes. These include effort, novelty of mands, previous relationships with problem behavior, and preference. However, there is little extant research on procedures for identifying which…

  3. A Design Selection Procedure.

    ERIC Educational Resources Information Center

    Kroeker, Leonard P.

    The problem of blocking on a status variable was investigated. The one-way fixed-effects analysis of variance, analysis of covariance, and generalized randomized block designs each treat the blocking problem in a different way. In order to compare these designs, it is necessary to restrict attention to experimental situations in which observations…

  4. Test of association: which one is the most appropriate for my study?

    PubMed

    Gonzalez-Chica, David Alejandro; Bastos, João Luiz; Duquia, Rodrigo Pereira; Bonamigo, Renan Rangel; Martínez-Mesa, Jeovany

    2015-01-01

    Hypothesis tests are statistical tools widely used for assessing whether or not there is an association between two or more variables. These tests provide a probability of the type 1 error (p-value), which is used to accept or reject the null study hypothesis. The aim here is to provide a practical guide to help researchers carefully select the most appropriate procedure to answer their research question. We discuss the logic of hypothesis testing and present the prerequisites of each procedure based on practical examples.

  5. Determination of biodiesel content in biodiesel/diesel blends using NIR and visible spectroscopy with variable selection.

    PubMed

    Fernandes, David Douglas Sousa; Gomes, Adriano A; Costa, Gean Bezerra da; Silva, Gildo William B da; Véras, Germano

    2011-12-15

    This work evaluates the use of the visible and near-infrared (NIR) ranges, separately and combined, to determine the biodiesel content in biodiesel/diesel blends using Multiple Linear Regression (MLR) with variable selection by the Successive Projections Algorithm (SPA). Full-spectrum models employing Partial Least Squares (PLS), Stepwise (SW) variable selection coupled with MLR, and PLS models with variable selection by Jack-knife (Jk) were compared with the proposed methodology. Several preprocessing strategies were evaluated; a Savitzky-Golay derivative with a second-order polynomial and a 17-point window was chosen for the NIR and visible-NIR ranges, with offset correction. A total of 100 blends with biodiesel content between 5 and 50% (v/v) were prepared starting from ten samples of biodiesel. In the NIR and visible regions the best model was SPA-MLR, using only two and eight wavelengths with RMSEP of 0.6439% (v/v) and 0.5741% (v/v), respectively, while in the visible-NIR region the best model was SW-MLR, using five wavelengths with an RMSEP of 0.9533% (v/v). The results indicate that both spectral ranges evaluated show potential for developing a rapid and nondestructive method to quantify biodiesel in blends with mineral diesel. Finally, the improvement in prediction error obtained with the variable selection procedure was significant. Copyright © 2011 Elsevier B.V. All rights reserved.
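
    A minimal sketch of the preprocessing and MLR steps named above, on synthetic spectra. The Savitzky-Golay settings match those reported (second-order polynomial, 17-point window; a first derivative is assumed), while the SPA step is replaced by hypothetical pre-selected wavelength indices.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.linear_model import LinearRegression

    # Synthetic stand-ins: 100 spectra (rows) and biodiesel content in % (v/v).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 600))       # absorbance at 600 wavelengths
    y = rng.uniform(5, 50, size=100)      # 5-50% (v/v) blends, as in the study

    # Preprocessing chosen in the paper: Savitzky-Golay derivative with a
    # second-order polynomial and a 17-point window (first derivative assumed).
    X_d = savgol_filter(X, window_length=17, polyorder=2, deriv=1, axis=1)

    # SPA itself is not implemented here; we assume it returned the indices
    # of two wavelengths (hypothetical values) and fit the MLR model on them.
    selected = [123, 457]
    mlr = LinearRegression().fit(X_d[:, selected], y)
    print("predicted content:", mlr.predict(X_d[:5, selected]))
    ```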

  6. Biocontrol of Phytophthora Blight and Anthracnose in Pepper by Sequentially Selected Antagonistic Rhizobacteria against Phytophthora capsici.

    PubMed

    Sang, Mee Kyung; Shrestha, Anupama; Kim, Du-Yeon; Park, Kyungseok; Pak, Chun Ho; Kim, Ki Deok

    2013-06-01

    We previously developed a sequential screening procedure to select antagonistic bacterial strains against Phytophthora capsici in pepper plants. In this study, we used a modified screening procedure to select effective biocontrol strains against P. capsici; we evaluated the effect of selected strains on Phytophthora blight and anthracnose occurrence and fruit yield in pepper plants under field and plastic house conditions from 2007 to 2009. We selected four potential biocontrol strains (Pseudomonas otitidis YJR27, P. putida YJR92, Tsukamurella tyrosinosolvens YJR102, and Novosphingobium capsulatum YJR107) among 239 bacterial strains. In the 3-year field tests, all the selected strains significantly (P < 0.05) reduced Phytophthora blight without influencing rhizosphere microbial populations; they showed similar or better levels of disease suppression than metalaxyl treatment in the 2007 and 2009 tests, but not in the 2008 test. In the 2-year plastic house tests, all the selected strains significantly (P < 0.05) reduced anthracnose incidence in at least one of the test years, but their biocontrol activities were variable. In addition, strains YJR27, YJR92, and YJR102, in certain harvests, increased pepper fruit numbers in field tests and red fruit weights in plastic house tests. Taken together, these results indicate that the screening procedure is rapid and reliable for the selection of potential biocontrol strains against P. capsici in pepper plants. In addition, these selected strains exhibited biocontrol activities against anthracnose, and some of the strains showed plant growth-promotion activities on pepper fruit.

  7. Integrating biological knowledge into variable selection: an empirical Bayes approach with an application in cancer biology

    PubMed Central

    2012-01-01

    Background An important question in the analysis of biochemical data is that of identifying subsets of molecular variables that may jointly influence a biological response. Statistical variable selection methods have been widely used for this purpose. In many settings, it may be important to incorporate ancillary biological information concerning the variables of interest. Pathway and network maps are one example of a source of such information. However, although ancillary information is increasingly available, it is not always clear how it should be used nor how it should be weighted in relation to primary data. Results We put forward an approach in which biological knowledge is incorporated using informative prior distributions over variable subsets, with prior information selected and weighted in an automated, objective manner using an empirical Bayes formulation. We employ continuous, linear models with interaction terms and exploit biochemically-motivated sparsity constraints to permit exact inference. We show an example of priors for pathway- and network-based information and illustrate our proposed method on both synthetic response data and by an application to cancer drug response data. Comparisons are also made to alternative Bayesian and frequentist penalised-likelihood methods for incorporating network-based information. Conclusions The empirical Bayes method proposed here can aid prior elicitation for Bayesian variable selection studies and help to guard against mis-specification of priors. Empirical Bayes, together with the proposed pathway-based priors, results in an approach with a competitive variable selection performance. In addition, the overall procedure is fast, deterministic, and has very few user-set parameters, yet is capable of capturing interplay between molecular players. The approach presented is general and readily applicable in any setting with multiple sources of biological prior knowledge. PMID:22578440

  8. Short and Long-Term Outcomes After Surgical Procedures Lasting for More Than Six Hours.

    PubMed

    Cornellà, Natalia; Sancho, Joan; Sitges-Serra, Antonio

    2017-08-23

    Long-term all-cause mortality and dependency after complex surgical procedures have not been assessed in the framework of value-based medicine. The aim of this study was to investigate the postoperative and long-term outcomes after surgical procedures lasting for more than six hours. Retrospective cohort study of patients undergoing a first elective complex surgical procedure between 2004 and 2013; heart and transplant surgery was excluded. Mortality and dependency on the healthcare system were selected as outcome variables. Gender, age, ASA, creatinine, albumin kinetics, complications, benign vs malignant underlying condition, number of drugs at discharge, and admission and length of stay in the ICU were recorded as predictive variables. Some 620 adult patients were included in the study. Postoperative, <1 year and <5 years cumulative mortality was 6.8%, 17.6% and 45%, respectively. Of patients discharged from hospital after surgery, 76% remained dependent on the healthcare system. In multivariate analysis for postoperative, <1 year and <5 years mortality, postoperative albumin concentration, ASA score and an ICU stay >7 days were the most significant independent predictive variables. Prolonged surgery carries a significant short- and long-term mortality and disability. These data may contribute to more informed decisions concerning major surgery in the framework of value-based medicine.

  9. Planetarium instructional efficacy: A research synthesis

    NASA Astrophysics Data System (ADS)

    Brazell, Bruce D.

    The purpose of the current study was to explore the instructional effectiveness of the planetarium in astronomy education using meta-analysis. A review of the literature revealed 46 studies related to planetarium efficacy. However, only 19 of the studies satisfied selection criteria for inclusion in the meta-analysis. Selected studies were then subjected to coding procedures, which extracted information such as subject characteristics, experimental design, and outcome measures. From these data, 24 effect sizes were calculated in the area of student achievement and five effect sizes were determined in the area of student attitudes using reported statistical information. Mean effect sizes were calculated for both the achievement and the attitude distributions. Additionally, each effect size distribution was subjected to homogeneity analysis. The attitude distribution was found to be homogeneous with a mean effect size of -0.09, which was not significant, p = .2535. The achievement distribution was found to be heterogeneous with a statistically significant mean effect size of +0.28, p < .05. Since the achievement distribution was heterogeneous, the analog to the ANOVA procedure was employed to explore variability in this distribution in terms of the coded variables. The analog to the ANOVA procedure revealed that the variability introduced by the coded variables did not fully explain the variability in the achievement distribution beyond subject-level sampling error under a fixed effects model. Therefore, a random effects model analysis was performed which resulted in a mean effect size of +0.18, which was not significant, p = .2363. However, a large random effect variance component was determined indicating that the differences between studies were systematic and yet to be revealed. The findings of this meta-analysis showed that the planetarium has been an effective instructional tool in astronomy education in terms of student achievement. However, the meta-analysis revealed that the planetarium has not been a very effective tool for improving student attitudes towards astronomy.
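
    The reported mean effect sizes and homogeneity analysis follow the standard fixed-effects computations sketched below (textbook formulas, not reproduced from the study):

    ```latex
    % Inverse-variance weights, weighted mean effect size, and the
    % homogeneity statistic Q for k studies; Q is referred to a
    % chi-squared distribution with k - 1 degrees of freedom, and a
    % significant Q (as for the achievement distribution) indicates
    % heterogeneity beyond sampling error.
    \[
      w_i = \frac{1}{v_i}, \qquad
      \bar{d} = \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad
      Q = \sum_{i=1}^{k} w_i \,(d_i - \bar{d})^2 .
    \]
    ```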

  10. Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.

    PubMed

    Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A

    2017-04-15

    Optimization of a multivariate calibration process has been undertaken for a visible-near-infrared (400-1100 nm) sensor system applied in monitoring the fermentation process of the cider produced in the Basque Country (Spain). The main parameters monitored included alcoholic proof, l-lactic acid content, glucose+fructose and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable preprocessing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens Uncertainty test, interval Partial Least Squares regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the calibration models' prediction ability for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Marine Corps Requirements and Procedures for Decontamination and Collective Protection Study. Volume 1.

    DTIC Science & Technology

    1982-08-01

    results of changing selected independent variables. The results of the projection of chemical agent density and cloud drift, including dissipation and... combination of agent effects, creates, over time, an extensive threat to wide areas of the FBHA, according to current theories. Shifting wind patterns... will be proposed in paragraph 2.7. The selection of decontamination equipment by the study team is a result of a combination of factors. First, current

  12. Framework for adaptive multiscale analysis of nonhomogeneous point processes.

    PubMed

    Helgason, Hannes; Bartroff, Jay; Abry, Patrice

    2011-01-01

    We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
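
    A minimal sketch of the generalized likelihood ratio computation for a nonhomogeneous Poisson process, under an assumed piecewise-constant template with one change point. The event times and rate values are illustrative, and the paper's basis-function templates and multiple-testing scheme are not reproduced here.

    ```python
    import numpy as np

    def loglik(events, rate_fn, T, n_grid=10_000):
        """Log-likelihood of a nonhomogeneous Poisson process on [0, T]:
        sum of log rate(t_i) over events minus the integral of the rate
        (approximated here with a left Riemann sum)."""
        grid = np.linspace(0.0, T, n_grid + 1)
        integral = np.sum(rate_fn(grid[:-1]) * np.diff(grid))
        return np.sum(np.log(rate_fn(events))) - integral

    T = 10.0
    events = np.sort(np.random.default_rng(2).uniform(0.0, T, size=80))

    # Null template: constant rate at the overall event density.
    null_rate = lambda t: np.full_like(t, len(events) / T)
    # Alternative template: piecewise constant with a change point at t = 5
    # (hypothetical fitted rates; in the paper these come from maximum
    # likelihood via the dynamic programming step).
    alt_rate = lambda t: np.where(t < 5.0, 5.0, 11.0)

    glr = 2.0 * (loglik(events, alt_rate, T) - loglik(events, null_rate, T))
    print(f"generalized likelihood ratio statistic: {glr:.2f}")
    ```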

  13. A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.

    PubMed

    Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe

    2014-01-01

    Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model has recently been published. It combines regression models and tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure with the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure as compared to the original procedure. The novel procedure was used for analyzing patterns of copy-number alterations in lung adenocarcinomas, with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. The results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
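
    The Bayesian Information Criterion used as the selection criterion above has its usual definition (a standard formula, not specific to this paper), trading model fit against complexity:

    ```latex
    % BIC for a fitted model with maximized likelihood L-hat,
    % k free parameters and n observations; smaller is better.
    \[
      \mathrm{BIC} = -2\ln\hat{L} + k\ln n .
    \]
    ```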

  14. Random Survival Forest in practice: a method for modelling complex metabolomics data in time to event analysis.

    PubMed

    Dietrich, Stefan; Floegel, Anna; Troll, Martina; Kühn, Tilman; Rathmann, Wolfgang; Peters, Anette; Sookthai, Disorn; von Bergen, Martin; Kaaks, Rudolf; Adamski, Jerzy; Prehn, Cornelia; Boeing, Heiner; Schulze, Matthias B; Illig, Thomas; Pischon, Tobias; Knüppel, Sven; Wang-Sattler, Rui; Drogan, Dagmar

    2016-10-01

    The application of metabolomics in prospective cohort studies is statistically challenging. Given the importance of appropriate statistical methods for the selection of disease-associated metabolites in highly correlated complex data, we combined random survival forest (RSF) with an automated backward elimination procedure that addresses such issues. Our RSF approach was illustrated with data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, with concentrations of 127 serum metabolites as exposure variables and time to development of type 2 diabetes mellitus (T2D) as outcome variable. An analysis of this data set using Cox regression with a stepwise selection method was recently published. The methodological comparison (RSF versus Cox regression) was replicated in two independent cohorts. Finally, the R code for implementing the metabolite selection procedure within the RSF syntax is provided. The application of the RSF approach in EPIC-Potsdam resulted in the identification of 16 incident T2D-associated metabolites, which slightly improved prediction of T2D when used in addition to traditional T2D risk factors and also when used together with classical biomarkers. The identified metabolites partly agreed with previous findings using Cox regression, though RSF selected a higher number of highly correlated metabolites. The RSF method appears to be a promising approach for the identification of disease-associated variables in complex data with time to event as outcome. The demonstrated RSF approach provides comparable findings to the generally used Cox regression, but also addresses the problem of multicollinearity and is suitable for high-dimensional data. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  15. Screening and clustering of sparse regressions with finite non-Gaussian mixtures.

    PubMed

    Zhang, Jian

    2017-06-01

    This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.

  16. The Choice of Spatial Interpolation Method Affects Research Conclusions

    NASA Astrophysics Data System (ADS)

    Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.

    2017-12-01

    Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures such as kriging, moving average or Inverse Distance Weighted (IDW) averaging, and nearest point, without due consideration of their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab-sampling approach at three locations (upstream, downstream and along a palm-oil effluent discharge point in the stream); four stations were sited along each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill and range), using PAleontological STatistics (PAST3), before the mean values of the variables were interpolated in the selected GIS software using each of the simple kriging, moving average and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity interpolated with kriging varied over 120.1-219.5 µS cm-1, it varied over 105.6-220.0 µS cm-1 and 135.0-173.9 µS cm-1 with the nearest point and moving average interpolations, respectively (Figure 2). It also showed that whereas the computed variogram produced the best-fit lines (with the least associated error, SSerror) under a Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that the interpolation procedure may affect decisions and conclusions drawn from modelling inferences.
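
    Of the interpolation procedures compared, IDW is simple enough to state in a few lines. The following is a generic sketch with hypothetical station coordinates and values, not the study's data or the ILWIS/ArcGIS implementations.

    ```python
    import numpy as np

    def idw(xy_known, values, xy_query, power=2.0):
        """Inverse-distance-weighted interpolation: each query point gets a
        weighted mean of known values, with weights 1 / distance**power."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)          # guard against zero distance
        w = 1.0 / d ** power
        return (w @ values) / w.sum(axis=1)

    # Hypothetical sampling stations and conductivity values (µS/cm).
    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    conductivity = np.array([120.1, 219.5, 150.0, 180.0])
    queries = np.array([[0.5, 0.5], [0.9, 0.1]])
    print(idw(stations, conductivity, queries))
    ```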

  17. A fast chaos-based image encryption scheme with a dynamic state variables selection mechanism

    NASA Astrophysics Data System (ADS)

    Chen, Jun-xin; Zhu, Zhi-liang; Fu, Chong; Yu, Hai; Zhang, Li-bo

    2015-03-01

    In recent years, a variety of chaos-based image cryptosystems have been investigated to meet the increasing demand for real-time secure image transmission. Most of them are based on permutation-diffusion architecture, in which permutation and diffusion are two independent procedures with fixed control parameters. This property results in two flaws. (1) At least two chaotic state variables are required for encrypting one plain pixel, in permutation and diffusion stages respectively. Chaotic state variables produced with high computation complexity are not sufficiently used. (2) The key stream solely depends on the secret key, and hence the cryptosystem is vulnerable against known/chosen-plaintext attacks. In this paper, a fast chaos-based image encryption scheme with a dynamic state variables selection mechanism is proposed to enhance the security and promote the efficiency of chaos-based image cryptosystems. Experimental simulations and extensive cryptanalysis have been carried out and the results prove the superior security and high efficiency of the scheme.
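
    As a toy illustration only of the dynamic-selection idea (the paper's actual cipher is not reproduced here, and the function names and key values below are hypothetical), the sketch runs two logistic-map state variables in parallel and lets the previous cipher byte decide which one keys the next pixel, making the key stream plaintext-dependent:

    ```python
    import numpy as np

    def logistic(x, r=3.99):
        """Logistic map x -> r*x*(1-x), chaotic for r near 4."""
        return r * x * (1.0 - x)

    def encrypt(pixels, keys=(0.3141, 0.6535)):
        """Didactic sketch: two chaotic state variables run in parallel,
        and which one keys each pixel depends on the previous cipher byte,
        so the key stream depends on the plaintext (resisting simple
        chosen-plaintext key-stream recovery)."""
        x = list(keys)
        out, prev = [], 0
        for p in pixels:
            x = [logistic(v) for v in x]
            sel = prev % 2                  # cipher-feedback state selection
            k = int(x[sel] * 256) % 256
            prev = p ^ k                    # diffusion via XOR key stream
            out.append(prev)
        return np.array(out, dtype=np.uint8)

    cipher = encrypt(np.frombuffer(b"example image row", dtype=np.uint8))
    print(cipher)
    ```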

  18. Modelling municipal solid waste generation: a review.

    PubMed

    Beigl, Peter; Lebersorger, Sandra; Salhofer, Stefan

    2008-01-01

    The objective of this paper is to review previously published models of municipal solid waste generation and to propose an implementation guideline which will provide a compromise between information gain and cost-efficient model development. The 45 modelling approaches identified in a systematic literature review aim at explaining or estimating the present or future waste generation using economic, socio-demographic or management-orientated data. A classification was developed in order to categorise these highly heterogeneous models according to the following criteria--the regional scale, the modelled waste streams, the hypothesised independent variables and the modelling method. A procedural practice guideline was derived from a discussion of the underlying models in order to propose beneficial design options concerning regional sampling (i.e., number and size of observed areas), waste stream definition and investigation, selection of independent variables and model validation procedures. The practical application of the findings was demonstrated with two case studies performed on different regional scales, i.e., on a household and on a city level. The findings of this review are finally summarised in the form of a relevance tree for methodology selection.

  19. Genome-wide regression and prediction with the BGLR statistical package.

    PubMed

    Pérez, Paulino; de los Campos, Gustavo

    2014-10-01

    Many modern genomic data analyses require implementing regressions where the number of parameters (p, e.g., the number of marker effects) exceeds sample size (n). Implementing these large-p-with-small-n regressions poses several statistical and computational challenges, some of which can be confronted using Bayesian methods. This approach allows integrating various parametric and nonparametric shrinkage and variable selection procedures in a unified and consistent manner. The BGLR R-package implements a large collection of Bayesian regression models, including parametric variable selection and shrinkage methods and semiparametric procedures (Bayesian reproducing kernel Hilbert spaces regressions, RKHS). The software was originally developed for genomic applications; however, the methods implemented are useful for many nongenomic applications as well. The response can be continuous (censored or not) or categorical (either binary or ordinal). The algorithm is based on a Gibbs sampler with scalar updates and the implementation takes advantage of efficient compiled C and Fortran routines. In this article we describe the methods implemented in BGLR, present examples of the use of the package, and discuss practical issues emerging in real-data analysis. Copyright © 2014 by the Genetics Society of America.

  20. VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS

    PubMed Central

    Huang, Jian; Horowitz, Joel L.; Wei, Fengrong

    2010-01-01

    We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
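
    In standard notation, the additive model and the group-penalized criterion described above can be written as follows (phi_k denotes the B-spline basis functions; w_j = 1 gives the initial group Lasso and data-driven w_j the adaptive step):

    ```latex
    % Nonparametric additive model with B-spline approximation of each
    % component, and the (adaptive) group Lasso objective; an estimated
    % zero block beta_j = 0 removes component j from the model.
    \[
      Y = \mu + \sum_{j=1}^{p} f_j(X_j) + \varepsilon,
      \qquad
      f_j(x) \approx \sum_{k=1}^{m} \beta_{jk}\,\phi_k(x),
    \]
    \[
      \hat{\beta} = \arg\min_{\beta}\;
        \sum_{i=1}^{n}\Bigl(Y_i - \mu - \sum_{j=1}^{p}\sum_{k=1}^{m}
          \beta_{jk}\,\phi_k(X_{ij})\Bigr)^{2}
        + \lambda \sum_{j=1}^{p} w_j \,\lVert \beta_j \rVert_2 .
    \]
    ```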

  1. A Rapid Approach to Modeling Species-Habitat Relationships

    NASA Technical Reports Server (NTRS)

    Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.

    2005-01-01

    A growing number of species require conservation or management efforts. Success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but commonly used data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale; moreover, the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid-cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index, and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid-cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.

  2. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
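
    A compressed sketch of the wrapper-selection-plus-GPR scheme on synthetic data; scikit-learn's SequentialFeatureSelector plays the role of the wrapper feature-subset selection (WFS), and the study's actual tooling, variables, and settings are not assumed.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error

    # Synthetic stand-ins for urban descriptors and measured LAeq values.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 25))            # 25 candidate input variables
    y = X[:, 0] * 3 + X[:, 5] - X[:, 9] + rng.normal(scale=0.5, size=200) + 65

    # Wrapper-style subset selection followed by Gaussian-process regression.
    selector = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5)
    model = make_pipeline(selector, GaussianProcessRegressor())
    model.fit(X[:150], y[:150])
    print("MAE:", mean_absolute_error(y[150:], model.predict(X[150:])))
    ```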

  3. Effect of genetic algorithm as a variable selection method on different chemometric models applied for the analysis of binary mixture of amoxicillin and flucloxacillin: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-03-01

    Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. Robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
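
    A minimal genetic-algorithm wavelength-selection sketch on synthetic spectra; the population size, rates, truncation selection, and PLS-based fitness are illustrative choices, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)

    # Synthetic UV spectra and concentrations (stand-ins for the AMX/FLX data).
    X = rng.normal(size=(60, 120))
    y = X[:, 10] + 0.5 * X[:, 55] + rng.normal(scale=0.1, size=60)

    def fitness(mask):
        """Cross-validated R^2 of a PLS model on the selected wavelengths."""
        if mask.sum() < 2:
            return -np.inf
        return cross_val_score(PLSRegression(n_components=2), X[:, mask], y, cv=5).mean()

    # A minimal GA: random bit masks, truncation selection, uniform
    # crossover, and bit-flip mutation.
    pop = rng.random((30, X.shape[1])) < 0.2
    for gen in range(20):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]       # keep the best 10
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(10, size=2)]
            child = np.where(rng.random(X.shape[1]) < 0.5, a, b)  # crossover
            flip = rng.random(X.shape[1]) < 0.01                  # mutation
            children.append(child ^ flip)
        pop = np.array(children)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected wavelengths:", np.flatnonzero(best))
    ```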

  4. Choice with a fixed requirement for food, and the generality of the matching relation

    PubMed Central

    Stubbs, D. Alan; Dreyfus, Leon R.; Fetterman, J. Gregor; Dorman, Lana G.

    1986-01-01

    Pigeons were trained on choice procedures in which responses on each of two keys were reinforced probabilistically, but only after a schedule requirement had been met. Under one arrangement, a fixed-interval choice procedure was used in which responses were not reinforced until the interval was over; then a response on one key would be reinforced, with the effective key changing irregularly from interval to interval. Under a second, fixed-ratio choice procedure, responses on either key counted towards completion of the ratio and then, once the ratio had been completed, a response on the probabilistically selected key would produce food. In one experiment, the schedule requirements were varied for both fixed-interval and fixed-ratio schedules. In the second experiment, relative reinforcement rate was varied. And in a third experiment, the duration of an intertrial interval separating choices was varied. The results for 11 pigeons across all three experiments indicate that there were often large deviations between relative response rates and relative reinforcement rates. Overall performance measures were characterized by a great deal of variability across conditions. More detailed measures of choice across the schedule requirement were also quite variable across conditions. In spite of this variability, performance was consistent across conditions in its efficiency of producing food. The absence of matching of behavior allocation to reinforcement rate indicates an important difference between the present procedures and other choice procedures; that difference raises questions about the specific conditions that lead to matching as an outcome. PMID:16812452

  5. Mediation in dyadic data at the level of the dyads: a Structural Equation Modeling approach.

    PubMed

    Ledermann, Thomas; Macho, Siegfried

    2009-10-01

    An extended version of the Common Fate Model (CFM) is presented to estimate and test mediation in dyadic data. The model can be used for distinguishable dyad members (e.g., heterosexual couples) or indistinguishable dyad members (e.g., homosexual couples) if (a) the variables measure characteristics of the dyadic relationship or shared external influences that affect both partners; if (b) the causal associations between the variables should be analyzed at the dyadic level; and if (c) the measured variables are reliable indicators of the latent variables. To assess mediation using Structural Equation Modeling, a general three-step procedure is suggested. The first is a selection of a good fitting model, the second a test of the direct effects, and the third a test of the mediating effect by means of bootstrapping. The application of the model along with the procedure for assessing mediation is illustrated using data from 184 couples on marital problems, communication, and marital quality. Differences with the Actor-Partner Interdependence Model and the analysis of longitudinal mediation by using the CFM are discussed.

  6. Biocontrol of Phytophthora Blight and Anthracnose in Pepper by Sequentially Selected Antagonistic Rhizobacteria against Phytophthora capsici

    PubMed Central

    Sang, Mee Kyung; Shrestha, Anupama; Kim, Du-Yeon; Park, Kyungseok; Pak, Chun Ho; Kim, Ki Deok

    2013-01-01

    We previously developed a sequential screening procedure to select antagonistic bacterial strains against Phytophthora capsici in pepper plants. In this study, we used a modified screening procedure to select effective biocontrol strains against P. capsici; we evaluated the effect of selected strains on Phytophthora blight and anthracnose occurrence and fruit yield in pepper plants under field and plastic house conditions from 2007 to 2009. We selected four potential biocontrol strains (Pseudomonas otitidis YJR27, P. putida YJR92, Tsukamurella tyrosinosolvens YJR102, and Novosphingobium capsulatum YJR107) among 239 bacterial strains. In the 3-year field tests, all the selected strains significantly (P < 0.05) reduced Phytophthora blight without influencing rhizosphere microbial populations; they showed similar or better levels of disease suppression than metalaxyl treatment in the 2007 and 2009 tests, but not in the 2008 test. In the 2-year plastic house tests, all the selected strains significantly (P < 0.05) reduced anthracnose incidence in at least one of the test years, but their biocontrol activities were variable. In addition, strains YJR27, YJR92, and YJR102, in certain harvests, increased pepper fruit numbers in field tests and red fruit weights in plastic house tests. Taken together, these results indicate that the screening procedure is rapid and reliable for the selection of potential biocontrol strains against P. capsici in pepper plants. In addition, these selected strains exhibited biocontrol activities against anthracnose, and some of the strains showed plant growth-promotion activities on pepper fruit. PMID:25288942

  7. Intradural Procedural Time to Assess Technical Difficulty of Superciliary Keyhole and Pterional Approaches for Unruptured Middle Cerebral Artery Aneurysms

    PubMed Central

    Choi, Yeon-Ju; Son, Wonsoo; Park, Ki-Su

    2016-01-01

    Objective This study used the intradural procedural time to assess the overall technical difficulty involved in surgically clipping an unruptured middle cerebral artery (MCA) aneurysm via a pterional or superciliary approach. The clinical and radiological variables affecting the intradural procedural time were investigated, and the intradural procedural time was compared between a superciliary keyhole approach and a pterional approach. Methods During a 5.5-year period, patients with a single MCA aneurysm were enrolled in this retrospective study. The selection criteria for a superciliary keyhole approach included: 1) maximum diameter of the unruptured MCA aneurysm <15 mm, 2) neck diameter of the MCA aneurysm <10 mm, and 3) aneurysm location involving the sphenoidal or horizontal (M1) segment of the MCA and the MCA bifurcation, excluding aneurysms distal to the MCA genu. Meanwhile, the control comparison group included patients meeting the same selection criteria as for a superciliary approach, yet who preferred a pterional approach to avoid a postoperative facial wound or because of preoperative skin trouble in the supraorbital area. To determine the variables affecting the intradural procedural time, a multiple regression analysis was performed using such data as patient age and gender, maximum aneurysm diameter, aneurysm neck diameter, and length of the pre-aneurysm M1 segment. In addition, the intradural procedural times were compared between the superciliary and pterional patient groups, along with the other variables. Results A total of 160 patients underwent a superciliary (n=124) or pterional (n=36) approach for an unruptured MCA aneurysm. In the multiple regression analysis, an increase in the diameter of the aneurysm neck (p<0.001) was identified as a statistically significant factor increasing the intradural procedural time. A Pearson correlation analysis also showed a positive correlation (r=0.340) between the neck diameter and the intradural procedural time. When comparing the superciliary and pterional groups, no statistically significant between-group difference was found in the intradural procedural time reflecting the technical difficulty (mean ± standard deviation: 29.8 ± 13.0 min versus 27.7 ± 9.6 min). Conclusion A superciliary keyhole approach can be a useful alternative to a pterional approach for an unruptured MCA aneurysm with a maximum diameter <15 mm and neck diameter <10 mm, and represents no greater technical challenge. For both surgical approaches, the technical difficulty increases along with the neck diameter of the MCA aneurysm. PMID:27847568

  8. Frequency and Intensive Care Related Risk Factors of Pneumothorax in Ventilated Neonates

    PubMed Central

    Bhat Yellanthoor, Ramesh; Ramdas, Vidya

    2014-01-01

    Objectives. The relationships of mechanical ventilation, and of intensive care procedures in particular, to pneumothorax in neonates are rarely studied. We aimed to evaluate the relationship of selected ventilator variables and risk events to pneumothorax. Methods. Pneumothorax was defined as an accumulation of air in the pleural cavity as confirmed by chest radiograph. The relationship of ventilator mode, selected settings, and risk procedures prior to detection of pneumothorax was studied using matched controls. Results. Of 540 neonates receiving mechanical ventilation, 10 (1.85%) were found to have pneumothorax. Respiratory distress syndrome, meconium aspiration syndrome, and pneumonia were the underlying lung pathologies. Pneumothorax mostly (80%) occurred within 48 hours of life. Among ventilated neonates, a significantly higher percentage with pneumothorax received mandatory ventilation than controls (70% versus 20%; P < 0.01). Peak inspiratory pressure >20 cm H2O and overventilation were not significantly associated with pneumothorax. More cases than controls underwent care procedures in the preceding 3 hours of the pneumothorax event. Mean airway pressure change (P = 0.052) and endotracheal suctioning (P = 0.05) were not significantly associated with pneumothorax. Reintubation (P = 0.003) and bagging (P = 0.015) were significantly associated with pneumothorax. Conclusion. Pneumothorax among ventilated neonates occurred at low frequency. Mandatory ventilation and selected care procedures in the preceding 3 hours showed significant associations. PMID:24876958

  9. Consistent model identification of varying coefficient quantile regression with BIC tuning parameter selection

    PubMed Central

    Zheng, Qi; Peng, Limin

    2016-01-01

    Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212
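
    In the usual notation, the adaptively weighted LASSO-penalized quantile regression studied here minimizes a check loss plus a weighted L1 penalty (a standard formulation, not copied from the paper); the BIC-type rule then selects lambda uniformly over the continuum of quantile levels tau:

    ```latex
    % Check (pinball) loss rho_tau and the penalized objective at
    % quantile level tau, with data-driven weights w_j.
    \[
      \rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
      \qquad
      \hat{\beta}(\tau) = \arg\min_{\beta}\;
        \sum_{i=1}^{n} \rho_\tau\bigl(y_i - x_i^{\top}\beta\bigr)
        + \lambda \sum_{j=1}^{p} w_j\,\lvert \beta_j \rvert .
    \]
    ```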

  10. Downscaling reanalysis data to high-resolution variables above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-05-01

    Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables, measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is particular in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatologic considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is considered by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model's sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour models). The forecast skill of the month/hour models largely depends on month and time of day, ranging from 0 to 0.8, but the mixed-field predictors generally perform better than the single-field predictors. At all time scales, the ESD model shows added value against two simple reference models: (i) the direct use of reanalysis grid-point values, and (ii) mean diurnal and seasonal cycles over the calibration period. The ESD model forecast for 1960 to 2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation, but is sensitive to the chosen predictor type. So far, we have not assessed the performance of NCEP/NCAR reanalysis data against other reanalysis products. The developed ESD model is computationally cheap and applicable wherever measurements are available for model calibration.
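
    A compressed sketch of the combined EOF-and-regression step with information-criterion screening, on synthetic stand-in data. The paper screens principal components by cross-validation with the AIC; the sketch below uses a simplified in-sample Gaussian AIC instead.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)

    # Stand-ins: 6-hourly reanalysis predictor field (time x grid points)
    # and a co-located station series (e.g., air temperature).
    field = rng.normal(size=(2000, 300))
    station = field[:, :3].sum(axis=1) + rng.normal(scale=0.3, size=2000)

    def aic(y, yhat, k):
        """Gaussian AIC: n*log(RSS/n) + 2k, used here to screen how many
        principal components enter the regression."""
        n = len(y)
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    # Combined EOF (PCA) and multiple-regression step: project the field
    # onto leading EOFs, then pick the PC count minimizing the AIC.
    pcs = PCA(n_components=20).fit_transform(field)
    best = min(
        range(1, 21),
        key=lambda k: aic(
            station, LinearRegression().fit(pcs[:, :k], station).predict(pcs[:, :k]), k
        ),
    )
    print("PCs retained by AIC screening:", best)
    ```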

  11. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.

  12. Multinomial logistic regression modelling of obesity and overweight among primary school students in a rural area of Negeri Sembilan

    NASA Astrophysics Data System (ADS)

    Ghazali, Amirul Syafiq Mohd; Ali, Zalila; Noor, Norlida Mohd; Baharum, Adam

    2015-10-01

    Multinomial logistic regression is widely used to model the outcomes of a polytomous response variable, a categorical dependent variable with more than two categories. The model assumes that the conditional mean of the dependent categorical variable is the logistic function of an affine combination of predictor variables. Its procedure gives a number of logistic regression models that make specific comparisons of the response categories. When there are q categories of the response variable, the model consists of q-1 logit equations which are fitted simultaneously. The model is validated by variable selection procedures, tests of regression coefficients, a significance test of the overall model, goodness-of-fit measures, and validation of predicted probabilities using odds ratios. This study used the multinomial logistic regression model to investigate obesity and overweight among primary school students in a rural area on the basis of their demographic profiles, lifestyles, and diet and food intake. The results indicated that obesity and overweight of students are related to gender, religion, sleep duration, time spent on electronic games, breakfast intake in a week, with whom meals are taken, protein intake, and also the interaction between breakfast intake in a week and sleep duration, and the interaction between gender and protein intake.
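    A minimal sketch of the machinery described above: a multinomial logit with q = 3 response categories fits the q-1 = 2 logit equations simultaneously against a baseline category. The weight-status outcome and the two predictors below are invented for illustration, not the study's survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
sleep = rng.normal(7, 1, n)                  # hours of sleep (assumed predictor)
games = rng.normal(2, 1, n)                  # hours of electronic games (assumed)
# Latent scores for categories 0=normal, 1=overweight, 2=obese.
eta1 = -1.0 - 0.4 * sleep + 0.5 * games
eta2 = -2.0 - 0.6 * sleep + 0.9 * games
probs = np.column_stack([np.zeros(n), eta1, eta2])
probs = np.exp(probs) / np.exp(probs).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in probs])

X = sm.add_constant(np.column_stack([sleep, games]))
fit = sm.MNLogit(y, X).fit(disp=False)       # two logit equations vs. the baseline
print(fit.params)                            # one coefficient column per equation
```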

  13. Multiple linear regression analysis

    NASA Technical Reports Server (NTRS)

    Edwards, T. R.

    1980-01-01

    Program rapidly selects best-suited set of coefficients. User supplies only vectors of independent and dependent data and specifies confidence level required. Program uses stepwise statistical procedure for relating minimal set of variables to set of observations; final regression contains only most statistically significant coefficients. Program is written in FORTRAN IV for batch execution and has been implemented on NOVA 1200.
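    The underlying stepwise idea, in a hedged Python sketch rather than the original FORTRAN IV: at each step, add the candidate predictor with the smallest p-value, and stop when no remaining candidate clears the user's confidence level. The data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

def forward_stepwise(X, y, alpha=0.05):
    remaining, chosen = list(range(X.shape[1])), []
    while remaining:
        pvals = {}
        for j in remaining:
            fit = sm.OLS(y, sm.add_constant(X[:, chosen + [j]])).fit()
            pvals[j] = fit.pvalues[-1]       # p-value of the candidate term
        best = min(pvals, key=pvals.get)
        if pvals[best] > alpha:              # nothing significant remains
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 6))
y = 3 * X[:, 1] - 2 * X[:, 4] + rng.normal(size=120)
print("retained columns:", forward_stepwise(X, y))
```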

  14. Probabilistic micromechanics for metal matrix composites

    NASA Astrophysics Data System (ADS)

    Engelstad, S. P.; Reddy, J. N.; Hopkins, Dale A.

    A probabilistic micromechanics-based nonlinear analysis procedure is developed to predict and quantify the variability in the properties of high temperature metal matrix composites. Monte Carlo simulation is used to model the probabilistic distributions of the constituent level properties including fiber, matrix, and interphase properties, volume and void ratios, strengths, fiber misalignment, and nonlinear empirical parameters. The procedure predicts the resultant ply properties and quantifies their statistical scatter. Graphite/copper and silicon carbide/titanium aluminide (SCS-6/Ti-15) unidirectional plies are considered to demonstrate the predictive capabilities. The procedure is believed to have a high potential for use in material characterization and selection to precede and assist in experimental studies of new high temperature metal matrix composites.
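    A minimal sketch of the Monte Carlo idea: sample constituent-level inputs (fiber and matrix moduli, fiber volume fraction) from assumed distributions and propagate them through a simple rule-of-mixtures ply model to quantify the scatter in the longitudinal modulus. The distributions and the micromechanics rule are illustrative stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000
Ef = rng.normal(400.0, 20.0, n_sim)    # fiber modulus, GPa (assumed distribution)
Em = rng.normal(110.0, 8.0, n_sim)     # matrix modulus, GPa (assumed distribution)
Vf = np.clip(rng.normal(0.35, 0.03, n_sim), 0.2, 0.5)  # fiber volume fraction

E1 = Vf * Ef + (1.0 - Vf) * Em         # rule of mixtures, longitudinal ply modulus
print(f"E1 mean = {E1.mean():.1f} GPa, std = {E1.std():.1f} GPa")
print(f"5th-95th percentile: {np.percentile(E1, [5, 95]).round(1)}")
```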

  15. CRISM Hyperspectral Data Filtering with Application to MSL Landing Site Selection

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Parente, M.; Clark, T.; Morgan, F.; Barnouin-Jha, O. S.; McGovern, A.; Murchie, S. L.; Taylor, H.

    2009-12-01

    We report on the development and implementation of a custom filtering procedure for Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) IR hyperspectral data that is suitable for incorporation into the CRISM Reduced Data Record (RDR) calibration pipeline. Over the course of the Mars Reconnaissance Orbiter (MRO) Primary Science Phase (PSP) and the ongoing Extended Science Phase (ESP), CRISM has operated with an IR detector temperature between ~107 K and ~127 K. This ~20 K range in operational temperature has resulted in variable data quality, with observations acquired at higher detector temperatures exhibiting a marked increase in both systematic and stochastic noise. The CRISM filtering procedure consists of two main data processing capabilities. The primary systematic noise component in CRISM IR data appears as along-track or column-oriented striping. This is addressed by the robust derivation and application of an inter-column ratio correction frame. The correction frame is developed through the serial evaluation of band-specific column ratio statistics and so does not compromise the spectral fidelity of the image cube. The dominant CRISM IR stochastic noise components appear as isolated data spikes or column-oriented segments of variable length with erroneous data values. The non-systematic noise is identified and corrected through the application of an iterative-recursive kernel modeling procedure which employs a formal statistical outlier test as the iteration control and recursion termination criterion. This allows the filtering procedure to make a statistically supported distinction between high-frequency (spatial/spectral) signal and high-frequency noise based on the information content of a given multidimensional data kernel. The governing statistical test also allows the kernel filtering procedure to be self-regulating and adaptive to the intrinsic noise level in the data. The CRISM IR filtering procedure is scheduled to be incorporated into the next augmentation of the CRISM IR calibration (version 3). The filtering algorithm will be applied to the I/F data (IF) delivered to the Planetary Data System (PDS), but the radiance-on-sensor data (RA) will remain unfiltered. The development of CRISM hyperspectral analysis products in support of the Mars Science Laboratory (MSL) landing site selection process has motivated the advance of CRISM-specific data processing techniques. The quantitative results of the CRISM IR filtering procedure as applied to CRISM observations acquired in support of MSL landing site selection will be presented.
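    A minimal sketch of the two capabilities described: (i) column-oriented destriping via a robust per-band column correction frame, and (ii) iterative kernel filtering that replaces statistically flagged outliers with a local median. A simple sigma-clipping rule stands in for the formal outlier test in the abstract; the cube dimensions and noise are synthetic, and scipy is assumed available.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(5)
cube = rng.normal(100.0, 1.0, size=(64, 64, 30))        # rows x columns x bands
cube += rng.normal(0, 0.5, size=(1, 64, 30))            # column striping
cube.flat[rng.choice(cube.size, 200, replace=False)] += 25.0  # isolated spikes

# (i) Destripe: per-band column medians ratioed to their overall band median.
col_med = np.median(cube, axis=0, keepdims=True)
cube /= col_med / np.median(col_med, axis=1, keepdims=True)

# (ii) Iterate: flag pixels far from a local median, replace them, repeat.
for _ in range(3):
    local = median_filter(cube, size=(3, 3, 3))
    resid = cube - local
    bad = np.abs(resid) > 4.0 * resid.std()             # stand-in outlier test
    if not bad.any():
        break
    cube[bad] = local[bad]
```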

  16. mfpa: Extension of mfp using the ACD covariate transformation for enhanced parametric multivariable modeling.

    PubMed

    Royston, Patrick; Sauerbrei, Willi

    2016-01-01

    In a recent article, Royston (2015, Stata Journal 15: 275-291) introduced the approximate cumulative distribution (acd) transformation of a continuous covariate x as a route toward modeling a sigmoid relationship between x and an outcome variable. In this article, we extend the approach to multivariable modeling by modifying the standard Stata program mfp. The result is a new program, mfpa, that has all the features of mfp plus the ability to fit a new model for user-selected covariates that we call fp1(p1, p2). The fp1(p1, p2) model comprises the best-fitting combination of a dimension-one fractional polynomial (fp1) function of x and an fp1 function of acd(x). We describe a new model-selection algorithm called function-selection procedure with acd transformation, which uses significance testing to attempt to simplify an fp1(p1, p2) model to a submodel, an fp1 or linear model in x or in acd(x). The function-selection procedure with acd transformation is related in concept to the fsp (fp function-selection procedure), which is an integral part of mfp and which is used to simplify a dimension-two (fp2) function. We describe the mfpa command and give univariable and multivariable examples with real data to demonstrate its use.
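    A schematic reading of the fp1 function search that mfpa builds on, not the Stata implementation: for each power p in the standard fractional-polynomial set, fit y ~ x^p (with p = 0 meaning log x) and keep the best-fitting power; the same search is then run on an acd-like transform, here crudely approximated by the empirical CDF. Data and the acd stand-in are assumptions.

```python
import numpy as np
import statsmodels.api as sm

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP power set

def fp1_search(x, y):
    best = (np.inf, None)
    for p in FP_POWERS:
        t = np.log(x) if p == 0 else x ** p
        rss = sm.OLS(y, sm.add_constant(t)).fit().ssr
        best = min(best, (rss, p))
    return best  # (residual sum of squares, best power)

rng = np.random.default_rng(6)
x = rng.uniform(0.5, 5.0, 300)
y = 1 / (1 + np.exp(-3 * (x - 2.5))) + 0.05 * rng.normal(size=300)  # sigmoid in x

acd = (np.argsort(np.argsort(x)) + 0.5) / len(x)  # crude empirical-CDF stand-in
print("best fp1 in x:      ", fp1_search(x, y))
print("best fp1 in acd(x): ", fp1_search(acd, y))
```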

  17. Measuring cardiac waste: the premier cardiac waste measures.

    PubMed

    Lowe, Timothy J; Partovian, Chohreh; Kroch, Eugene; Martin, John; Bankowitz, Richard

    2014-01-01

    The authors developed 8 measures of waste associated with cardiac procedures to assist hospitals in comparing their performance with peer facilities. Measure selection was based on review of the research literature, clinical guidelines, and consultation with key stakeholders. Development and validation used the data from 261 hospitals in a split-sample design. Measures were risk adjusted using Premier's CareScience methodologies or mean peer value based on Medicare Severity Diagnosis-Related Group assignment. High variability was found in resource utilization across facilities. Validation of the measures using item-to-total correlations (range = 0.27-0.78), Cronbach α (.88), and Spearman rank correlation (0.92) showed high reliability and discriminatory power. Because of the level of variability observed among hospitals, this study suggests that there is opportunity for facilities to design successful waste reduction programs targeting cardiac-device procedures.

  18. Shade selection performed by novice dental professionals and colorimeter.

    PubMed

    Klemetti, E; Matela, A-M; Haag, P; Kononen, M

    2006-01-01

    The objective of this study was to test inter-observer variability in shade selection for porcelain restorations, using three different shade guides: Vita Lumin Vacuum, Vita 3D-Master and Procera. Nineteen young dental professionals acted as observers. The results were also compared with those of a digital colorimeter (Shade Eye Ex; Shofu, Japan). Regarding repeatability, no significant differences were found between the three shade guides, although repeatability was relatively low (33-43%). Agreement with the colorimetric results was also low (8-34%). In conclusion, shade selection shows moderate to great inter-observer variation. In teaching and standardizing the shade selection procedure, a digital colorimeter may be a useful educational tool.

  19. Estimation of selected seasonal streamflow statistics representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.; Atkins, John T.

    2010-01-01

    Regional equations and procedures were developed for estimating seasonal 1-day 10-year, 7-day 10-year, and 30-day 5-year hydrologically based low-flow frequency values for unregulated streams in West Virginia. Regional equations and procedures also were developed for estimating the seasonal U.S. Environmental Protection Agency harmonic-mean flows and the 50-percent flow-duration values. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Regional equations were developed using ordinary least squares regression using statistics from 117 U.S. Geological Survey continuous streamgage stations as dependent variables and basin characteristics as independent variables. Equations for three regions in West Virginia-North, South-Central, and Eastern Panhandle Regions-were determined. Drainage area, average annual precipitation, and longitude of the basin centroid are significant independent variables in one or more of the equations. The average standard error of estimates for the equations ranged from 12.6 to 299 percent. Procedures developed to estimate the selected seasonal streamflow statistics in this study are applicable only to rural, unregulated streams within the boundaries of West Virginia that have independent variables within the limits of the stations used to develop the regional equations: drainage area from 16.3 to 1,516 square miles in the North Region, from 2.78 to 1,619 square miles in the South-Central Region, and from 8.83 to 3,041 square miles in the Eastern Panhandle Region; average annual precipitation from 42.3 to 61.4 inches in the South-Central Region and from 39.8 to 52.9 inches in the Eastern Panhandle Region; and longitude of the basin centroid from 79.618 to 82.023 decimal degrees in the North Region. All estimates of seasonal streamflow statistics are representative of the period from the 1930 to the 2002 climatic year.

  20. Kepler AutoRegressive Planet Search: Motivation & Methodology

    NASA Astrophysics Data System (ADS)

    Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian

    2015-08-01

    The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) models, and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; ARMA-type model selection and fitting; and a transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply the procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate the effectiveness of the KARPS procedures. The ARMA-type modeling is effective at reducing stellar noise, but also reduces and transforms the transit signal into ingress/egress spikes. A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
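    A minimal sketch of the first two KARPS-style steps: fit an ARMA-type model to a light curve with autoregressive stellar variability and inspect the residuals in which a transit search would then run. The project itself works in R; this Python/statsmodels version is illustrative only, and the injected box-shaped transit stands in for a real signal.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
n = 2000
flux = np.zeros(n)
for t in range(1, n):                       # AR(1) stellar variability
    flux[t] = 0.9 * flux[t - 1] + rng.normal(0, 0.3)
flux[np.arange(n) % 400 < 10] -= 1.5        # periodic box-shaped transits

fit = ARIMA(flux, order=(2, 0, 1)).fit()    # ARMA(2,1); order chosen by eye here
resid = fit.resid
print("AIC of ARMA fit:", round(fit.aic, 1))
# As the abstract notes, the AR model partly absorbs the flat transit floor,
# leaving ingress/egress spikes in `resid` for a comb-filter search.
```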

  1. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    We present a Bayesian random-effects model to assess resource selection, modeling the probability of use of land units characterized by discrete and continuous measures. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  2. Improve projections of changes in southern African summer rainfall through comprehensive multi-timescale empirical statistical downscaling

    NASA Astrophysics Data System (ADS)

    Dieppois, B.; Pohl, B.; Eden, J.; Crétat, J.; Rouault, M.; Keenlyside, N.; New, M. G.

    2017-12-01

    The water management community has hitherto neglected or underestimated many of the uncertainties in climate impact scenarios, in particular uncertainties associated with decadal climate variability. Uncertainty in the state-of-the-art global climate models (GCMs) is time-scale-dependent, e.g. stronger at decadal than at interannual timescales, in response to the different parameterizations and to internal climate variability. In addition, non-stationarity in statistical downscaling is widely recognized as a key problem, in which the time-scale dependency of predictors plays an important role. As with global climate modelling, therefore, the selection of downscaling methods must proceed with caution to avoid unintended consequences of over-correcting the noise in GCMs (e.g. interpreting internal climate variability as a model bias). GCM outputs from the Coupled Model Intercomparison Project 5 (CMIP5) have therefore first been selected based on their ability to reproduce southern African summer rainfall variability and its teleconnections with Pacific sea-surface temperature across the dominant timescales. In observations, southern African summer rainfall has recently been shown to exhibit significant periodicities at the interannual (2-8 years), quasi-decadal (8-13 years) and inter-decadal (15-28 years) timescales, which can be interpreted as the signature of ENSO, the IPO, and the PDO over the region. Most CMIP5 GCMs underestimate southern African summer rainfall variability and its teleconnections with Pacific SSTs at these three timescales. In addition, according to a more in-depth analysis of historical and pi-control runs, this bias might result from internal climate variability in some of the CMIP5 GCMs, suggesting potential for bias-corrected prediction based on empirical statistical downscaling. A multi-timescale regression-based downscaling procedure, which determines the predictors across the different timescales, has thus been used to simulate southern African summer rainfall. This multi-timescale procedure shows much better skill in simulating decadal timescales of variability than commonly used statistical downscaling approaches.

  3. From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health.

    PubMed

    Fock, Heino O; Kraus, Gerd

    2016-01-01

    Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence the corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows the multi-dimensional nature of sustainability (social, economic, ecological) to be accounted for in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and ex-post branches each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR assessment for the southern North Sea showed that, with the 36 selected indicators, 48% of ecosystem variability could be explained. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branches is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance-driven approach built upon participatory processes, a science-driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems.

  4. From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health

    PubMed Central

    Kraus, Gerd

    2016-01-01

    Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence the corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity, and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows the multi-dimensional nature of sustainability (social, economic, ecological) to be accounted for in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and ex-post branches each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information, as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR assessment for the southern North Sea showed that, with the 36 selected indicators, 48% of ecosystem variability could be explained. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability, in line with state space models. The combination of the ex-ante and ex-post branches is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance-driven approach built upon participatory processes, a science-driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems. PMID:27509185

  5. Surgeon and type of anesthesia predict variability in surgical procedure times.

    PubMed

    Strum, D P; Sampson, A R; May, J H; Vargas, L G

    2000-05-01

    Variability in surgical procedure times increases the cost of healthcare delivery by increasing both the underutilization and overutilization of expensive surgical resources. To reduce variability in surgical procedure times, we must identify and study its sources. Our data set consisted of all surgeries performed over a 7-yr period at a large teaching hospital, resulting in 46,322 surgical cases. To study factors associated with variability in surgical procedure times, data mining techniques were used to segment and focus the data so that the analyses would be both technically and intellectually feasible. The data were subdivided into 40 representative segments of manageable size and variability based on headers adopted from the common procedural terminology classification. Each data segment was then analyzed using a main-effects linear model to identify and quantify specific sources of variability in surgical procedure times. The single most important source of variability in surgical procedure times was surgeon effect. Type of anesthesia, age, gender, and American Society of Anesthesiologists risk class were additional sources of variability. Intrinsic case-specific variability, unexplained by any of the preceding factors, was found to be highest for shorter surgeries relative to longer procedures. Variability in procedure times among surgeons was a multiplicative function (proportionate to time) of surgical time and total procedure time, such that as procedure times increased, variability in surgeons' surgical time increased proportionately. Surgeon-specific variability should be considered when building scheduling heuristics for longer surgeries. Results concerning variability in surgical procedure times due to factors such as type of anesthesia, age, gender, and American Society of Anesthesiologists risk class may be extrapolated to scheduling in other institutions, although specifics on individual surgeons may not. This research identifies factors associated with variability in surgical procedure times, knowledge of which may ultimately be used to improve surgical scheduling and operating room utilization.
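    A hedged sketch of a main-effects linear model for (log) procedure times with surgeon and anesthesia type as factors, in the spirit of the segment-wise models described above. The data frame, factor levels, and effect sizes are fabricated for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 600
df = pd.DataFrame({
    "surgeon": rng.choice(list("ABCDE"), n),
    "anesthesia": rng.choice(["general", "regional"], n),
    "age": rng.normal(55, 12, n),
})
surgeon_eff = df["surgeon"].map(dict(zip("ABCDE", [0, .1, .2, .3, .4])))
df["log_time"] = (4.5 + surgeon_eff + 0.2 * (df["anesthesia"] == "general")
                  + 0.002 * df["age"] + rng.normal(0, 0.25, n))

fit = smf.ols("log_time ~ C(surgeon) + C(anesthesia) + age", data=df).fit()
print(fit.summary().tables[1])   # surgeon contrasts dominate, as in the study
```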

  6. Feasibility of Real-Time Selection of Frequency Tables in an Acoustic Simulation of a Cochlear Implant

    PubMed Central

    Fitzgerald, Matthew; Sagi, Elad; Morbiwala, Tasnim A.; Tan, Chin-Tuan; Svirsky, Mario A.

    2013-01-01

    Objectives Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CI), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test the feasibility of whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design 34 normal-hearing adults listened to a noise-vocoded acoustic simulation of a cochlear implant and adjusted the frequency table in real time until they obtained a frequency table that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed us to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. After obtaining a self-selected table, we measured CNC word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters with the noisebands of the noise-vocoder simulation, and a “right information” table that is similar to that used in most cochlear implant speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from the frequency-matched table. The real-time selection process took on average 2–3 minutes for each trial, and the between-trial variability was comparable to that previously observed with closely-related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and to find a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable to that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure. PMID:23807089

  7. Quality control developments for graphite/PMR15 polyimide composites materials

    NASA Technical Reports Server (NTRS)

    Sheppard, C. H.; Hoggatt, J. T.

    1979-01-01

    The problem of lot-to-lot and within-lot variability of graphite/PMR-15 prepreg was investigated. The PMR-15 chemical characterization data were evaluated along with the processing conditions controlling the manufacture of PMR-15 resin and monomers. Manufacturing procedures were selected to yield a consistently reproducible graphite prepreg that could be processed into acceptable structural elements.

  8. Evaluation of Model Specification, Variable Selection, and Adjustment Methods in Relation to Propensity Scores and Prognostic Scores in Multilevel Data

    ERIC Educational Resources Information Center

    Yu, Bing; Hong, Guanglei

    2012-01-01

    This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…

  9. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  10. Reward and uncertainty in exploration programs

    NASA Technical Reports Server (NTRS)

    Kaufman, G. M.; Bradley, P. G.

    1971-01-01

    A set of variables which are crucial to the economic outcome of petroleum exploration is discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for these variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values, the economic magnitudes of interest, net return and unit production cost, are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
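    A minimal sketch of the simulation loop described: draw independent values for the uncertain exploration variables, compute the economic outcomes, repeat many times, and read the resulting histograms as approximate densities. All distributions and cost constants are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)
n_trials, wells, p_success = 50_000, 10, 0.15
cost_per_well, price = 2.0, 0.5    # $M per well; $M per unit of reserves (assumed)

successes = rng.binomial(wells, p_success, n_trials)
# Crudely, total discovered reserves = (number of successes) x one lognormal
# field-size draw per program; the original draws a size per discovery.
size = rng.lognormal(mean=1.0, sigma=1.2, size=n_trials) * successes
net_return = price * size - cost_per_well * wells
unit_cost = np.where(size > 0, cost_per_well * wells / size, np.nan)

print("P(program loses money) =", round((net_return < 0).mean(), 3))
print("net return percentiles (5, 50, 95):",
      np.nanpercentile(net_return, [5, 50, 95]).round(1))
print("median unit production cost:", round(float(np.nanmedian(unit_cost)), 2))
```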

  11. Total cost comparison of 2 biopsy methods for nonpalpable breast lesions.

    PubMed

    Bodai, B I; Boyd, B; Brown, L; Wadley, H; Zannis, V J; Holzman, M

    2001-05-01

    To identify, quantify, and compare total facility costs for 2 breast biopsy methods: vacuum-assisted biopsy (VAB) and needle-wire-localized open surgical biopsy (OSB). A time-and-motion study was done to identify unit resources used in both procedures. Costs were imputed from published literature to value resources. A comparison of the total (fixed and variable) costs of the 2 procedures was done. A convenience sample of 2 high-volume breast biopsy (both VAB and OSB) facilities was identified. A third facility (OSB only) and 8 other sites (VAB only) were used to capture variation. Staff interviews, patient medical records, and billing data were used to check observed data. One hundred and sixty-seven uncomplicated procedures (71 OSBs, 96 VABs) were observed. Available demographic and clinical data were analyzed to assess selection bias, and sensitivity analyses were done on the main assumptions. The total facility costs of the VAB procedure were lower than the costs of the OSB procedure. The overall cost advantage for using VAB ranges from $314 to $843 per procedure depending on the facility type. Variable cost comparison indicated little difference between the 2 procedures. The largest fixed cost difference was $763. Facilities must consider the cost of new technology, especially when the new technology is as effective as the present technology. The seemingly high cost of equipment might negatively influence a decision to adopt VAB, but when total facility costs were analyzed, the new technology was less costly.

  12. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. Results from the application of other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate feature selection choice. Copyright © 2011 Elsevier B.V. All rights reserved.
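    A minimal sketch of one of the benchmarked families, interval PLS (iPLS): split the wavelength axis into contiguous windows, fit a PLS model on each window alone, and rank windows by cross-validated error. The spectra here are synthetic and this illustrates only the mechanics, not the paper's benchmark.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
n, n_wl = 120, 400
spectra = rng.normal(size=(n, n_wl))
y = spectra[:, 150:160].sum(axis=1) + 0.3 * rng.normal(size=n)  # informative band

n_intervals = 20
width = n_wl // n_intervals
scores = []
for i in range(n_intervals):
    Xi = spectra[:, i * width:(i + 1) * width]
    s = cross_val_score(PLSRegression(n_components=3), Xi, y,
                        scoring="neg_root_mean_squared_error", cv=5).mean()
    scores.append(s)
best = int(np.argmax(scores))
print(f"best interval: {best} (wavelength columns {best * width}-{(best + 1) * width - 1})")
```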

  13. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and the new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables such as population, gross domestic product, electricity demand per capita, and employment and unemployment numbers. In addition, variable selection procedures are developed to select significant explanatory variables. The model evaluation was performed using the coefficient of determination (R(2)) and mean square error (MSE). The optimum model, which produced the lowest testing MSE (2.46) and the highest R(2) (0.97), had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.
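    A minimal sketch of a NARX-style setup: feed a neural network lagged values of the target (the autoregressive part) together with exogenous inputs such as population and GDP. A plain MLP over constructed lag features stands in for the original feedforward NARX network; all series, lags, and sizes below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
years = 40
pop = np.linspace(1.0, 2.0, years) + 0.02 * rng.normal(size=years)
gdp = np.linspace(5.0, 12.0, years) + 0.1 * rng.normal(size=years)
waste = 0.5 * pop + 0.1 * gdp + 0.05 * rng.normal(size=years)

lag = 1                                     # one lag, echoing the selected model
X = np.column_stack([waste[:-lag], pop[lag:], gdp[lag:]])
y = waste[lag:]
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(X[:-5], y[:-5])
pred = model.predict(X[-5:])                # hold out the last five years
print("held-out RMSE:", np.sqrt(np.mean((pred - y[-5:]) ** 2)))
```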

  14. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates, such as genetic markers, have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property: that is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite sample performance of the proposed procedure and further compare the proposed procedure and existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.
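    For contrast, a minimal sketch of the marginal (SIS-style) baseline the authors improve on: rank each covariate by the |z| statistic of its univariate Cox fit. The joint-likelihood screening proposed in the paper would fit groups of predictors together instead. The `lifelines` package is assumed available and the data are synthetic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(12)
n, p = 300, 50
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"x{j}" for j in range(p)])
risk = np.exp(0.8 * X["x0"] - 0.8 * X["x1"])
T = rng.exponential(1.0 / risk)                    # event/censoring times
E = (T < np.quantile(T, 0.8)).astype(int)          # roughly 20% censoring

zscores = {}
for col in X.columns:
    df = pd.DataFrame({col: X[col], "T": T, "E": E})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    zscores[col] = abs(cph.summary.loc[col, "z"])  # marginal screening statistic
top = sorted(zscores, key=zscores.get, reverse=True)[:5]
print("top-ranked covariates:", top)
```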

  15. Nonlinear probabilistic finite element models of laminated composite shells

    NASA Technical Reports Server (NTRS)

    Engelstad, S. P.; Reddy, J. N.

    1993-01-01

    A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data are compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.

  16. Computational Hemodynamics Involving Artificial Devices

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kiris, Cetin; Feiereisen, William (Technical Monitor)

    2001-01-01

    This paper reports the progress being made towards developing a complete blood flow simulation capability in humans, especially in the presence of artificial devices such as valves and ventricular assist devices. Device modeling poses unique challenges distinct from computing the blood flow in natural hearts and arteries. Many elements are needed, such as flow solvers, geometry modeling including flexible walls, moving-boundary procedures, and physiological characterization of blood. As a first step, computational technology developed for aerospace applications was extended in the recent past to the analysis and development of mechanical devices. The blood flow in these devices is practically incompressible and Newtonian, and thus various incompressible Navier-Stokes solution procedures can be selected depending on the choice of formulations, variables and numerical schemes. Two primitive-variable formulations are discussed, as well as the overset grid approach used to handle complex moving geometry. This procedure has been applied to several artificial devices. Among these, recent progress made in developing the DeBakey axial-flow blood pump is presented from a computational point of view. Computational and clinical issues are discussed in detail, as well as additional work needed.

  17. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    USGS Publications Warehouse

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
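    A minimal sketch of AIC-based multimodel inference as described: compute the AIC for each candidate model, convert AIC differences to Akaike weights, and form a model-averaged estimate of a coefficient of interest. The candidate set and data are invented for illustration.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 150
X = rng.normal(size=(n, 3))                       # e.g. three habitat covariates
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)

models, aics, betas = [], [], []
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        fit = sm.OLS(y, sm.add_constant(X[:, subset])).fit()
        models.append(subset)
        aics.append(fit.aic)
        # coefficient of covariate 0 in this model (0 if it is not included)
        betas.append(fit.params[subset.index(0) + 1] if 0 in subset else 0.0)

delta = np.array(aics) - min(aics)
w = np.exp(-0.5 * delta)
w /= w.sum()                                      # Akaike weights
print("model-averaged beta_0:", np.dot(w, betas))
print("best model:", models[int(np.argmin(aics))], "weight:", w.max().round(2))
```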

  18. Comparison of weighed food record procedures for the reference methods in two validation studies of food frequency questionnaires.

    PubMed

    Ishii, Yuri; Ishihara, Junko; Takachi, Ribeka; Shinozawa, Yurie; Imaeda, Nahomi; Goto, Chiho; Wakai, Kenji; Takahashi, Toshiaki; Iso, Hiroyasu; Nakamura, Kazutoshi; Tanaka, Junta; Shimazu, Taichi; Yamaji, Taiki; Sasazuki, Shizuka; Sawada, Norie; Iwasaki, Motoki; Mikami, Haruo; Kuriki, Kiyonori; Naito, Mariko; Okamoto, Naoko; Kondo, Fumi; Hosono, Satoyo; Miyagawa, Naoko; Ozaki, Etsuko; Katsuura-Kamano, Sakurako; Ohnaka, Keizo; Nanri, Hinako; Tsunematsu-Nakahata, Noriko; Kayama, Takamasa; Kurihara, Ayako; Kojima, Shiomi; Tanaka, Hideo; Tsugane, Shoichiro

    2017-07-01

    Although open-ended dietary assessment methods, such as weighed food records (WFRs), are generally considered to be comparable, differences between procedures may influence the outcome when WFRs are conducted independently. In this paper, we describe the WFR procedures used in two studies and compare the subsequent outcomes. WFRs of 12 days (3 days in each of four seasons) were conducted as reference methods for intake data, in accordance with the study protocol, among a subsample of participants of two large cohort studies. We compared the WFR procedures descriptively. We also compared some dietary intake variables, such as the frequency of foods and dishes and contributing foods, to determine whether there were differences in the portion size distribution and in intra- and inter-individual variation in nutrient intakes caused by the difference in procedures. General procedures of the dietary records were conducted in accordance with the National Health and Nutrition Survey and were the same for both studies. Differences were seen in 1) the selection of multiple days (non-consecutive days versus consecutive days); and 2) the survey sheet recording method (individual versus family participation). However, the foods contributing to intake of energy and selected nutrients, the portion size distribution, and intra- and inter-individual variation in nutrient intakes were similar between the two studies. Our comparison of WFR procedures in two independent studies revealed several differences. Notwithstanding these procedural differences, however, the subsequent outcomes were similar. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  19. The Fisher-Markov selector: fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data.

    PubMed

    Cheng, Qiang; Zhou, Hongbo; Cheng, Jie

    2011-06-01

    Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward the efficient selection of the globally optimal subset of features, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with sparsity as an optimization objective. With properly identified measures for sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach to optimizing the measures to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. In pattern recognition and from a model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function, which in fact can be obtained with an explicit expression.

  20. Choosing a Surgeon: An Exploratory Study of Factors Influencing Selection of a Gender Affirmation Surgeon.

    PubMed

    Ettner, Randi; Ettner, Frederic; White, Tonya

    2016-01-01

    Purpose: Selecting a healthcare provider is often a complicated process. Many factors appear to govern the decision as to how to select the provider in the patient-provider relationship. While the possibility of changing primary care physicians or specialists exists, decisions regarding surgeons are immutable once surgery has been performed. This study is an attempt to assess the importance attached to various factors involved in selecting a surgeon to perform gender affirmation surgery (GAS). It was hypothesized that owing to the intimate nature of the surgery, the expense typically involved, the emotional meaning attached to the surgery, and other variables, decisions regarding choice of surgeon for this procedure would involve factors other than those that inform more typical healthcare provider selection or surgeon selection for other plastic/reconstructive procedures. Methods: Questionnaires were distributed to individuals who had undergone GAS and individuals who had undergone elective plastic surgery to assess decision-making. Results: The results generally confirm previous findings regarding how patients select providers. Conclusion: Choosing a surgeon to perform gender-affirming surgery is a challenging process, but patients are quite rational in their decision-making. Unlike prior studies, we did not find a preference for gender-concordant surgeons, even though the surgery involves the genital area. Providing strategies and resources for surgical selection can improve patient satisfaction.

  1. Regression Simulation of Turbine Engine Performance - Accuracy Improvement (TASK IV)

    DTIC Science & Technology

    1978-09-30

    Generalized Form of the Regression Equation for the Optimized Polynomial Exponent Method...altitude, Mach number and power setting combinations were generated during the ARES evaluation. The orthogonal Latin Square selection procedure...pattern. In data generation, the low (L), mid (M), and high (H) values of a variable are not always the same. At some of the corner points where

  2. Surface water risk assessment of pesticides in Ethiopia.

    PubMed

    Teklu, Berhan M; Adriaanse, Paulien I; Ter Horst, Mechteld M S; Deneer, John W; Van den Brink, Paul J

    2015-03-01

    Scenarios for future use in the pesticide registration procedure in Ethiopia were designed for 3 separate Ethiopian locations and are intended to be protective for the whole of Ethiopia. The scenarios estimate concentrations in surface water resulting from agricultural use of pesticides for a small stream and for two types of small ponds. Seven pesticides were selected because they were estimated to pose the highest risk to humans on the basis of volume of use, application rate, and acute and chronic human toxicity, assuming exposure as a result of the consumption of surface water. Potential ecotoxicological risks were not considered as a selection criterion at this stage. Estimates of exposure concentrations in surface water were established using modelling software also applied in the EU registration procedure (PRZM and TOXSWA). Input variables included physico-chemical properties and data such as crop calendars, irrigation schedules, meteorological information, and detailed application data specifically tailored to the Ethiopian situation. The results indicate that for all the pesticides investigated the acute human risk resulting from the consumption of surface water is low to negligible, whereas agricultural use of chlorothalonil, deltamethrin, endosulfan and malathion in some crops may result in medium to high risk to aquatic species. The predicted environmental concentration estimates are based on procedures similar to those used at the EU level and in the USA. Addition of aquatic macrophytes as an ecotoxicological endpoint may constitute a welcome future addition to the risk assessment procedure. Implementation of the methods used for risk characterization constitutes a good step forward in the pesticide registration procedure in Ethiopia. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on the number of input dimensions and from information storage problems. These problems are addressed by the developed feature reduction software, which uses a new modified selection mechanism that adds solution candidates from the middle region of the search space. The hybrid system software is designed to reduce the input attributes of systems with large numbers of input variables. The software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, becoming locked into local solutions is a further problem that the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
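
    The roulette-wheel (fitness-proportionate) selection mechanism mentioned in this record is a standard genetic-algorithm component. The following minimal Python sketch illustrates it on a toy feature-mask population; it is a generic illustration with a made-up fitness function, not the authors' software.

      import random

      def roulette_select(population, fitnesses):
          """Fitness-proportionate (roulette-wheel) selection of one individual."""
          total = sum(fitnesses)
          pick = random.uniform(0, total)
          acc = 0.0
          for individual, fitness in zip(population, fitnesses):
              acc += fitness
              if acc >= pick:
                  return individual
          return population[-1]

      # Toy population: binary masks over 12 candidate attributes. The fitness
      # here simply rewards smaller reducts; a real system would score each
      # mask by classifier accuracy on the reduced attribute set.
      random.seed(0)
      population = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
      fitnesses = [1.0 / (1 + sum(mask)) for mask in population]
      parent = roulette_select(population, fitnesses)
      print(parent)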

  5. "Congratulations, you have been randomized into the control group!(?)": issues to consider when recruiting schools for matched-pair randomized control trials of prevention programs.

    PubMed

    Ji, Peter; DuBois, David L; Flay, Brian R; Brechling, Vanessa

    2008-03-01

    Recruiting schools into a matched-pair randomized control trial (MP-RCT) to evaluate the efficacy of a school-level prevention program presents challenges for researchers. We considered which of 2 procedures would be more effective for recruiting schools into the study and assigning them to conditions. In one procedure (recruit and match/randomize), we would recruit schools and match them prior to randomization; in the other (match/randomize and recruit), we would match schools and randomize them prior to recruitment. We considered how each procedure affected the randomization process and our ability to recruit schools into the study. After implementing the selected procedure, we evaluated the equivalence of the treatment and control group schools, and of the participating and nonparticipating schools, on school demographic variables. We decided on the recruit and match/randomize procedure because we thought it would provide the opportunity to build rapport with the schools and prepare them for the randomization process, thereby increasing the likelihood that they would accept their randomly assigned conditions. Neither the treatment and control group schools nor the participating and nonparticipating schools exhibited statistically significant differences from each other on any of the school demographic variables. Recruitment of schools prior to matching and randomization in an MP-RCT may facilitate the recruitment of schools and thus enhance both the statistical power and the representativeness of study findings. Future research would benefit from the consideration of a broader range of variables (e.g., readiness to implement a comprehensive prevention program) both in matching schools and in evaluating their representativeness relative to nonparticipating schools.

  6. Semiconductor lasers vs LEDs in diagnostic and therapeutic medicine

    NASA Astrophysics Data System (ADS)

    Gryko, Lukasz; Zajac, Andrzej; Szymanska, Justyna; Blaszczak, Urszula; Palkowska, Anna; Kulesza, Ewa

    2016-12-01

    Semiconductor emitters are used in many areas of medicine, enabling new methods of diagnosis, treatment and effective prevention of many diseases. The article presents selected areas of application of semiconductor sources in the UV-VIS-NIR range, where in recent years semiconductor lasers and LEDs have competed for applications. Examples of applications of the analyzed sources are given for LLLT, PDT and optical diagnostics using the color contrast procedure. Selected LLLT results of the authors are presented, obtained by means of a developed optoelectronic system for objectified irradiation, together with studies of the impact of low-energy laser and LED irradiation on lines of endothelial cells of the umbilical vein. The usefulness of a spectrally tunable LED lighting system for diagnostic purposes is also demonstrated, including its use as an illuminator for surface applications in the variable color contrast procedure for the illuminated object.

  7. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    NASA Astrophysics Data System (ADS)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
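
    The AL query step described here (manually label the pool objects whose inclusion would most improve predictions) is commonly implemented as uncertainty sampling. The sketch below uses a margin-based query rule on synthetic features with a random forest; it is a hedged toy illustration, not the authors' pipeline, and all data and names are hypothetical.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      # Toy stand-ins for light-curve features: a small bright/nearby training
      # set and a larger unlabeled survey pool drawn from a shifted distribution,
      # mimicking sample selection bias.
      X_train = rng.normal(0.0, 1.0, size=(50, 5))
      y_train = (X_train[:, 0] > 0).astype(int)
      X_pool = rng.normal(0.5, 1.2, size=(1000, 5))

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      for _ in range(3):  # a few AL iterations
          clf.fit(X_train, y_train)
          proba = np.sort(clf.predict_proba(X_pool), axis=1)
          margin = proba[:, -1] - proba[:, -2]          # small margin = uncertain
          query = np.argsort(margin)[:10]               # objects to label manually
          y_new = (X_pool[query, 0] > 0).astype(int)    # stand-in for the human oracle
          X_train = np.vstack([X_train, X_pool[query]])
          y_train = np.concatenate([y_train, y_new])
          X_pool = np.delete(X_pool, query, axis=0)
      print(len(X_train))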

  8. A reliability-based cost effective fail-safe design procedure

    NASA Technical Reports Server (NTRS)

    Hanagud, S.; Uppaluri, B.

    1976-01-01

    The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations are used to develop procedures for estimating the probability of failure and its change with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated by examples. In particular, the optimum design of a stiffened panel is discussed.

  9. Simultaneous grouping and ranking with combination of SOM and TOPSIS for selection of preferable analytical procedure for furan determination in food.

    PubMed

    Jędrkiewicz, Renata; Tsakovski, Stefan; Lavenu, Aurore; Namieśnik, Jacek; Tobiszewski, Marek

    2018-02-01

    A novel methodology for grouping and ranking with application of self-organizing maps and multicriteria decision analysis is presented. The dataset consists of 22 objects, which are analytical procedures applied to furan determination in food samples. They are described by 10 variables relating to their analytical performance and to environmental and economic aspects. Multivariate statistical analysis makes it possible to limit the amount of input data for the ranking analysis. The assessment results show that the most beneficial procedures are based on microextraction techniques with GC-MS final determination. We show how the information obtained from the two tools complements each other, and discuss the applicability of combining grouping and ranking. Copyright © 2017 Elsevier B.V. All rights reserved.
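
    TOPSIS itself is a compact algorithm: normalize the decision matrix, weight it, locate the ideal and anti-ideal alternatives, and rank by relative closeness. A minimal Python sketch follows; the weights, criteria, and scores are invented for illustration and do not come from the paper.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives with TOPSIS; matrix is (alternatives x criteria),
          benefit[j] is True where larger criterion values are better."""
          m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
          m = m * weights                               # apply criteria weights
          ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
          anti = np.where(benefit, m.min(axis=0), m.max(axis=0))
          d_pos = np.linalg.norm(m - ideal, axis=1)
          d_neg = np.linalg.norm(m - anti, axis=1)
          return d_neg / (d_pos + d_neg)                # closeness: higher is better

      # Invented example: 4 procedures scored on recovery (%), solvent use (mL)
      # and cost; only recovery is a benefit criterion.
      scores = np.array([[95.0, 10.0, 3.0],
                         [88.0, 5.0, 2.0],
                         [99.0, 20.0, 6.0],
                         [90.0, 8.0, 2.5]])
      closeness = topsis(scores, np.array([0.5, 0.3, 0.2]),
                         np.array([True, False, False]))
      print(np.argsort(-closeness))  # best-to-worst ordering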

  10. A critical review of field techniques employed in the survey of large woody debris in river corridors: a central European perspective.

    PubMed

    Máčka, Zdeněk; Krejčí, Lukáš; Loučková, Blanka; Peterková, Lucie

    2011-10-01

    In forested watersheds, large woody debris (LWD) is an integral component of river channels and floodplains. Fallen trees have a significant impact on physical and ecological processes in fluvial ecosystems. An enormous body of literature concerning LWD in river corridors is currently available. However, synthesis and statistical treatment of the published data are hampered by the heterogeneity of methodological approaches. Likewise, the precision and accuracy of data arising out of published surveys have yet to be assessed. For this review, a literature scrutiny of 100 randomly selected research papers was made to examine the most frequently surveyed LWD variables and field procedures. Some 29 variables arose for individual LWD pieces, and 15 variables for wood accumulations. The literature survey revealed a large variability in field procedures for LWD surveys. In many studies (32), description of field procedure proved less than adequate, rendering the results impossible to reproduce in comparable fashion by other researchers. This contribution identifies the main methodological problems and sources of error associated with the mapping and measurement of the most frequently surveyed variables of LWD, both as individual pieces and in accumulations. The discussion stems from our own field experience with LWD survey in river systems of various geomorphic styles and types of riparian vegetation in the Czech Republic in the 2004-10 period. We modelled variability in terms of LWD number, volume, and biomass for three geomorphologically contrasting river systems. The results appeared to be sensitive, in the main, to sampling strategy and prevailing field conditions; less variability was produced by errors of measurement. Finally, we propose a comprehensive standard field procedure for LWD surveyors, including a total of 20 variables describing spatial position, structural characteristics and the functions and dynamics of LWD. However, resources are only rarely available for highly time-demanding surveys. We therefore include a set of core LWD metrics for routine baseline surveys of individual LWD pieces (diameter, length, rootwad size, preservation of branches and rootwad, geomorphological/ecological function, stability/mobility) and wood accumulations (number of LWD pieces, geometrical dimensions, channel blockage, wood/air ratio), which may provide useful background information for river management, hydromorphological assessment, habitat evaluation, and inter-regional comparisons.

  11. An improved standardization procedure to remove systematic low frequency variability biases in GCM simulations

    NASA Astrophysics Data System (ADS)

    Mehrotra, Rajeshwar; Sharma, Ashish

    2012-12-01

    The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for biases in the low- as well as higher-order moments of the GCM-derived variables across selected multiple timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that three- to five-iteration RNBCs are the most effective in removing distributional and persistence-related biases across the timescales considered.
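
    The core NBC idea, matching low-order moments of the model series to observations at a given timescale, can be illustrated at a single timescale; the full nested and recursive procedures repeat such corrections across daily, monthly, and annual scales. This is a simplified sketch with synthetic data, not the authors' code.

      import numpy as np

      def moment_bias_correct(gcm, obs):
          """Rescale a model series so its mean and standard deviation match
          the observed series (one timescale of a nested correction)."""
          return (gcm - gcm.mean()) / gcm.std() * obs.std() + obs.mean()

      rng = np.random.default_rng(1)
      obs = rng.normal(10.0, 2.0, size=360)   # e.g., 30 years of observed monthly values
      gcm = rng.normal(12.0, 3.5, size=360)   # biased GCM output for the same months
      corrected = moment_bias_correct(gcm, obs)
      print(round(corrected.mean(), 2), round(corrected.std(), 2))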

  12. Financial Management and Control for Decision Making in Urban Local Bodies in India Using Statistical Techniques

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Sidhakam; Bandyopadhyay, Gautam

    2010-10-01

    The council of most Urban Local Bodies (ULBs) has limited scope for decision making in the absence of an appropriate financial control mechanism. Information about the expected amount of own funds during a particular period is of great importance for decision making. Therefore, in this paper, we present a set of findings and establish models for estimating receipts from own sources, and the payments made from them, using multiple regression analysis. Data for sixty months from a reputed ULB in West Bengal have been considered for fitting the regression models. This can be used as part of a financial management and control procedure by the council to estimate the effect on own funds. In our study we have considered two models using multiple regression analysis. "Model I" takes total adjusted receipts as the dependent variable and selected individual receipts as the independent variables. Similarly, "Model II" takes total adjusted payments as the dependent variable and selected individual payments as independent variables. The difference between Model I and Model II is the surplus or deficit affecting own funds. This may be applied for decision-making purposes by the council.
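
    As a sketch of the two-model setup (total adjusted receipts and payments each regressed on selected individual heads, with the difference giving the effect on own funds), the following Python fragment fits both models by ordinary least squares on synthetic monthly data; all numbers are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical monthly data: 3 receipt heads and 4 payment heads over 60 months.
      X1 = rng.normal(size=(60, 3))
      X2 = rng.normal(size=(60, 4))
      receipts = X1 @ np.array([2.0, 1.5, 0.5]) + rng.normal(scale=0.1, size=60)
      payments = X2 @ np.array([1.0, 0.8, 1.2, 0.4]) + rng.normal(scale=0.1, size=60)

      A1 = np.column_stack([np.ones(60), X1])   # Model I design matrix
      A2 = np.column_stack([np.ones(60), X2])   # Model II design matrix
      b1, *_ = np.linalg.lstsq(A1, receipts, rcond=None)
      b2, *_ = np.linalg.lstsq(A2, payments, rcond=None)
      surplus = A1 @ b1 - A2 @ b2               # estimated effect on own funds
      print(round(float(surplus.mean()), 3))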

  13. Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.

    PubMed

    Jiang, Yuan; He, Yunxiao; Zhang, Heping

    LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
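
    Schematically, and hedging that the paper's exact discrepancy measure may differ from this form, the pLASSO criterion described above can be written as

      \min_{\beta} \; -\ell(\beta; y, X) + \lambda \lVert \beta \rVert_1 + \eta\, D(\beta; \beta^{\text{prior}}),

    where \ell is the generalized linear model log-likelihood, \lambda controls sparsity as in ordinary LASSO, and \eta weights the discrepancy D between the fitted model and the prior information; taking \eta \to 0 recovers standard LASSO, so unreliable priors can be downweighted.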

  14. Periodontal regeneration around natural teeth.

    PubMed

    Garrett, S

    1996-11-01

    1. Evidence is conclusive (Table 2) that periodontal regeneration in humans is possible following the use of bone grafts, guided tissue regeneration (GTR) procedures, both without and in combination with bone grafts, and root demineralization procedures. 2. Clinically, guided tissue regeneration procedures have demonstrated significant positive clinical change beyond that achieved with debridement alone in treating mandibular and maxillary (buccal only) Class II furcations. Similar data exist for intraosseous defects. Evidence suggests that the use of bone grafts or GTR procedures produces equal clinical benefit in treating intraosseous defects. Further research is necessary to evaluate GTR procedures compared to, or combined with, bone grafts in treating intraosseous defects. 3. Although there are some data suggesting hopeful results in Class II furcations, the clinical advantage of procedures combining present regenerative techniques remains to be demonstrated. Additional randomized controlled trials with sufficient power are needed to demonstrate the potential usefulness of these techniques. 4. Outcomes following regenerative attempts remain somewhat variable, with differences in results between studies and individual subjects. Some of this variability is likely patient related, in terms of compliance with plaque control and maintenance procedures, as well as personal habits, e.g., smoking. Variations in the defects selected for study may also affect predictability of outcomes, along with other factors. 5. There is evidence to suggest that present regenerative techniques lead to significant amounts of regeneration at localized sites on specific teeth. However, if complete regeneration is to become a reality, additional stimuli to enhance the regenerative process are likely needed. Perhaps this will be accomplished in the future, with combined procedures that include appropriate polypeptide growth factors or tissue factors to provide additional stimulus.

  15. The impact of functional analysis methodology on treatment choice for self-injurious and aggressive behavior.

    PubMed Central

    Pelios, L; Morren, J; Tesch, D; Axelrod, S

    1999-01-01

    Self-injurious behavior (SIB) and aggression have been the concern of researchers because of the serious impact these behaviors have on individuals' lives. Despite the plethora of research on the treatment of SIB and aggressive behavior, the reported findings have been inconsistent regarding the effectiveness of reinforcement-based versus punishment-based procedures. We conducted a literature review to determine whether a trend could be detected in researchers' selection of reinforcement-based procedures versus punishment-based procedures, particularly since the introduction of functional analysis to behavioral assessment. The data are consistent with predictions made in the past regarding the potential impact of functional analysis methodology. Specifically, the findings indicate that, once maintaining variables for problem behavior are identified, experimenters tend to choose reinforcement-based procedures rather than punishment-based procedures as treatment for both SIB and aggressive behavior. Results indicated an increased interest in studies on the treatment of SIB and aggressive behavior, particularly since 1988. PMID:10396771

  16. Stochastic Vehicle Mobility Forecasts Using the NATO Reference Mobility Model. Report 1. Basic Concepts and Procedures

    DTIC Science & Technology

    1992-08-01

    …Table 4. Conjectured Statistical Attributes of Parameters Selected for the Error…

  17. An empirical examination of the effects of family commitment in education on student achievement in seventh grade science

    NASA Astrophysics Data System (ADS)

    Wang, Jianjun; Wildman, Louis

    A national database from the Longitudinal Study of American Youth (LSAY) was employed to examine the effects of family commitment in education on student achievement in seventh grade science. The backward elimination procedure in the Statistical Analysis System (SAS) was adopted in this study to select significant variables of family commitment at α = .05. The results show that around 22% of the variance in student science achievement can be explained by the selected significant LSAY variables. An analysis of the impact of family commitment seems to indicate that parental education and encouragement are important factors in the improvement of student achievement. However, educators, including school personnel and parents, should exercise caution regarding how they help students with their homework and how they reward students for good grades.

  18. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
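
    The forward selection procedure used to pick the input variables can be sketched generically: add, at each step, the candidate whose inclusion most improves cross-validated fit, and stop when no candidate helps. The toy sketch below substitutes a linear model for the paper's ANN to stay short; the data and variable roles are synthetic.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      # Synthetic stand-ins for 9 candidate meteorological predictors of ozone.
      X = rng.normal(size=(300, 9))
      y = 2 * X[:, 0] - X[:, 3] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=300)

      selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
      while remaining:
          scores = {j: cross_val_score(LinearRegression(),
                                       X[:, selected + [j]], y, cv=5).mean()
                    for j in remaining}
          j_best = max(scores, key=scores.get)
          if scores[j_best] <= best_score:   # stop when no candidate improves CV R^2
              break
          best_score = scores[j_best]
          selected.append(j_best)
          remaining.remove(j_best)
      print(selected)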

  19. Predictors of professional behaviour and academic outcomes in a UK medical school: A longitudinal cohort study.

    PubMed

    Adam, Jane; Bore, Miles; Childs, Roy; Dunn, Jason; Mckendree, Jean; Munro, Don; Powis, David

    2015-01-01

    Over the past 70 years, there has been a recurring debate in the literature and in the popular press about how best to select medical students. This implies that we are still not getting it right: either some students are unsuited to medicine or the graduating doctors are considered unsatisfactory, or both. Our aim was to determine whether particular variables at the point of selection might distinguish those more likely to become satisfactory professional doctors, by following a complete intake cohort of students throughout medical school and analysing all the data used for the students' selection, their performance on a range of other potential selection tests, academic and clinical assessments throughout their studies, and records of professional behaviour covering the entire five years of the course. A longitudinal database captured the following anonymised information for every student (n = 146) admitted in 2007 to the Hull York Medical School (HYMS) in the UK: demographic data (age, sex, citizenship); performance in each component of the selection procedure; performance in some other possible selection instruments (cognitive and non-cognitive psychometric tests); professional behaviour in tutorials and in other clinical settings; academic performance and clinical and communication skills at summative assessments throughout; and professional behaviour lapses monitored routinely as part of the fitness-to-practise procedures. Correlations were sought between predictor variables and criterion variables chosen to demonstrate the full range of course outcomes, from failure to complete the course to graduation with honours, and to reveal clinical and professional strengths and weaknesses. Student demography was found to be an important predictor of outcomes, with females, younger students and British citizens performing better overall. The selection variable "HYMS academic score", based on prior academic performance, was a significant predictor of components of the Year 4 written and Year 5 clinical examinations. Some cognitive subtest scores from the UK Clinical Aptitude Test (UKCAT) and the UKCAT total score were also significant predictors of the same components, and a unique predictor of the Year 5 written examination. A number of the non-cognitive tests were significant independent predictors of Years 4 and 5 clinical performance, and of lapses in professional behaviour. First- and second-year tutor ratings were significant predictors of all outcomes, both desirable and undesirable. Performance in Years 1 and 2 written exams did not predict performance in Year 4 but did generally predict Year 5 written and clinical performance. Measures of a range of relevant selection attributes and personal qualities can predict intermediate and end-of-course achievements in the academic, clinical and professional behaviour domains. In this study, HYMS academic score, some UKCAT subtest scores, the total UKCAT score, and some non-cognitive tests completed at the outset of studies together predicted outcomes most comprehensively. Tutor evaluation of students early in the course also identified the more and less successful students in the three domains of academic, clinical and professional performance. These results may be helpful in informing the future development of selection tools.

  20. Correlation of ERTS MSS data and earth coordinate systems

    NASA Technical Reports Server (NTRS)

    Malila, W. A. (Principal Investigator); Hieber, R. H.; Mccleer, A. P.

    1973-01-01

    The author has identified the following significant results. Experience has revealed a problem in the analysis and interpretation of ERTS-1 multispectral scanner (MSS) data. The problem is one of accurately correlating ERTS-1 MSS pixels with analysis areas specified on aerial photographs or topographic maps, for training recognition computers and/or evaluating recognition results. It is difficult for an analyst to accurately identify which ERTS-1 pixels on a digital image display belong to specific areas and test plots, especially when they are small. A computer-aided procedure to correlate coordinates from topographic maps and/or aerial photographs with ERTS-1 data coordinates has been developed. In the procedure, a map transformation from earth coordinates to ERTS-1 scan line and point numbers is calculated using selected ground control points and the method of least squares. The map transformation is then applied to the earth coordinates of selected areas to obtain the corresponding ERTS-1 point and line numbers. An optional provision allows moving the boundaries of the plots inward by variable distances so the selected pixels will not overlap adjacent features.
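
    The described map transformation is a least-squares fit from earth coordinates to scan line and point numbers over ground control points. A minimal sketch using an affine (first-order) transform follows; the actual procedure may use a higher-order polynomial, and all coordinates below are hypothetical.

      import numpy as np

      def fit_affine(easting, northing, line, point):
          """Least-squares affine transform from map (earth) coordinates to
          scanner (line, point) indices, fitted on ground control points."""
          A = np.column_stack([easting, northing, np.ones(len(easting))])
          coeff_line, *_ = np.linalg.lstsq(A, line, rcond=None)
          coeff_point, *_ = np.linalg.lstsq(A, point, rcond=None)
          return coeff_line, coeff_point

      # Hypothetical ground control points (map coordinates -> scan line/point).
      e = np.array([500100.0, 500900.0, 501800.0, 502500.0])
      n = np.array([4200050.0, 4200800.0, 4201600.0, 4202300.0])
      ln = np.array([120.0, 110.0, 98.0, 90.0])
      pt = np.array([40.0, 52.0, 66.0, 77.0])
      cl, cp = fit_affine(e, n, ln, pt)
      # Map a new earth coordinate into scanner indices:
      q = np.array([501000.0, 4201000.0, 1.0])
      print(float(q @ cl), float(q @ cp))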

  1. Virtual Reality as a Distraction Intervention to Relieve Pain and Distress During Medical Procedures: A Comprehensive Literature Review.

    PubMed

    Indovina, Paola; Barone, Daniela; Gallo, Luigi; Chirico, Andrea; De Pietro, Giuseppe; Antonio, Giordano

    2018-02-26

    This review aims to provide a framework for evaluating the utility of virtual reality (VR) as a distraction intervention to alleviate pain and distress during medical procedures. We firstly describe the theoretical bases underlying the VR analgesic and anxiolytic effects and define the main factors contributing to its efficacy, which largely emerged from studies on healthy volunteers. Then, we provide a comprehensive overview of the clinical trials using VR distraction during different medical procedures, such as burn injury treatments, chemotherapy, surgery, dental treatment, and other diagnostic and therapeutic procedures. A broad literature search was performed using as main terms "virtual reality", "distraction" and "pain". No date limit was applied and all the retrieved studies on immersive VR distraction during medical procedures were selected. VR has proven to be effective in reducing procedural pain, as almost invariably observed even in patients subjected to extremely painful procedures, such as patients with burn injuries undergoing wound care and physical therapy. Moreover, VR seemed to decrease cancer-related symptoms in different settings, including during chemotherapy. Only mild and infrequent side effects were observed. Despite these promising results, future long-term randomized controlled trials with larger sample sizes and evaluating not only self-report measures but also physiological variables are needed. Further studies are also required both to establish predictive factors to select patients who can benefit from VR distraction and to design hardware/software systems tailored to the specific needs of different patients and able to provide the greatest distraction at the lowest cost.

  2. Using an Android application to assess registration strategies in open hepatic procedures: a planning and simulation tool

    NASA Astrophysics Data System (ADS)

    Doss, Derek J.; Heiselman, Jon S.; Collins, Jarrod A.; Weis, Jared A.; Clements, Logan W.; Geevarghese, Sunil K.; Miga, Michael I.

    2017-03-01

    Sparse surface digitization with an optically tracked stylus for use in an organ surface-based image-to-physical registration is an established approach for image-guided open liver surgery procedures. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies with the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application to allow surgeons to study the performance of surface digitization patterns on registration. Given the intrinsic nature of soft-tissue, we incorporate realistic organ deformation when assessing fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results using four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.

  3. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of co-variates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
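
    The two-stage mechanics (sparse first-stage regressions of the covariates on the instruments, then a sparse second-stage regression of the response on the fitted covariates) can be sketched as follows. This toy Python fragment uses L1 penalties in both stages; it omits the endogeneity structure and tuning that the paper handles carefully, and is only an illustration of the pipeline shape.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(4)
      n, p_z, p_x = 200, 50, 30
      Z = rng.normal(size=(n, p_z))                        # candidate instruments
      X = Z[:, :3] @ rng.normal(size=(3, p_x)) + rng.normal(size=(n, p_x))
      y = 1.5 * X[:, 0] - X[:, 2] + rng.normal(size=n)     # sparse covariate effects

      # Stage 1: sparse regression of each covariate on the instruments.
      X_hat = np.column_stack([
          Lasso(alpha=0.1).fit(Z, X[:, j]).predict(Z) for j in range(p_x)
      ])
      # Stage 2: sparse regression of the response on the fitted covariates.
      stage2 = Lasso(alpha=0.1).fit(X_hat, y)
      print(np.nonzero(stage2.coef_)[0])                   # selected covariates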

  4. 24 CFR 983.51 - Owner proposal selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Owner proposal selection procedures. (a) Procedures for selecting PBV proposals. The PHA administrative plan must describe the procedures for owner submission of PBV proposals and for PHA selection of PBV proposals...

  5. Indices estimated using REML/BLUP and introduction of a super-trait for the selection of progenies in popcorn.

    PubMed

    Vittorazzi, C; Amaral Junior, A T; Guimarães, A G; Viana, A P; Silva, F H L; Pena, G F; Daher, R F; Gerhardt, I F S; Oliveira, G H F; Pereira, M G

    2017-09-27

    Selection indices commonly utilize economic weights, which can make the resulting genetic gains arbitrary. In popcorn, this is even more evident due to the negative correlation between the main characteristics of economic importance: grain yield and popping expansion. As an alternative to classical biometric selection indices, the optimal procedure restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) allows the simultaneous estimation of genetic parameters and the prediction of genotypic values. Based on the mixed model methodology, the objective of this study was to investigate the comparative efficiency of eight selection indices estimated by REML/BLUP for the effective selection of superior popcorn families in the eighth intrapopulation recurrent selection cycle. We also investigated the efficiency of including the variable "expanded popcorn volume per hectare" in the selection of superior progenies. In total, 200 full-sib families were evaluated in two different areas in the North and Northwest regions of the State of Rio de Janeiro, Brazil. The REML/BLUP procedure resulted in higher estimated gains than those obtained with classical biometric selection index methodologies and should be incorporated into the selection of progenies. The following indices resulted in higher gains in the characteristics of greatest economic importance: the classical selection index (values attributed by trial), via REML/BLUP, and the greatest genotypic values (expanded popcorn volume per hectare), via REML. The expanded popcorn volume per hectare characteristic enabled satisfactory gains in grain yield and popping expansion, and should be considered a super-trait in popcorn breeding programs.

  6. Selected options supporting use of the group embedded figures test in modeling achievement in clinical laboratory science programs.

    PubMed

    Powell, M E

    1995-01-01

    To identify, in light of predicted future shortages of allied-health personnel, student and curricular characteristics of clinical laboratory science (CLS) programs relevant to recruitment and retention at the baccalaureate level. Options for modeling achievement in CLS programs are developed, and designs and procedures for clarifying procedural questions are considered in the context of delivery of instruction for specialized curricula and skill development. Considerable attention is given to the potential for using the Group Embedded Figures Test (GEFT) in modeling, advising, designing curricula, and monitoring quality improvement of programs and graduates. Supporting evidence is supplied from the literature for options in developing an appropriate model for examining those salient variables known to have linkages to achievement. An argument is presented for better understanding of antecedent variables affecting achievement and retention of CLS students. In addition, a case is made for development of an appropriate model examining variables identified in the literature as being linked to achievement. Dynamic models based on these considerations should be developed chronologically from entry through graduation, with emphasis on growth at year-end milestones.

  7. Low Survival Rates of Oral and Oropharyngeal Squamous Cell Carcinoma

    PubMed Central

    da Silva Júnior, Francisco Feliciano; dos Santos, Karine de Cássia Batista; Ferreira, Stefania Jeronimo

    2017-01-01

    Aim To assess the epidemiological and clinical factors that influence the prognosis of oral and oropharyngeal squamous cell carcinoma (SCC). Methods One hundred and twenty-one cases of oral and oropharyngeal SCC were selected. The survival curves for each variable were estimated using the Kaplan-Meier method. The Cox regression model was applied to assess the effect of the variables on survival. Results Cancers at an advanced stage were observed in 103 patients (85.1%). Cancers of the tongue were the most frequent (23.1%). Survival was 59.9% at one year, 40.7% at two years, and 27.8% at 5 years. Significantly lower survival was associated with alcohol intake (p = 0.038), advanced cancer staging (p = 0.003), and treatment without surgery (p < 0.001). When these variables were included in the Cox regression model, only surgical treatment (p = 0.005) demonstrated a significant effect on survival. Conclusion The findings suggest that patients who underwent surgery had a greater survival rate compared with those who did not. The low survival rates and the high percentage of patients diagnosed at advanced stages demonstrate that oral and oropharyngeal cancer patients should receive more attention. PMID:28638410

  8. The Simultaneous Medicina-Planck Experiment: data acquisition, reduction and first results

    NASA Astrophysics Data System (ADS)

    Procopio, P.; Massardi, M.; Righini, S.; Zanichelli, A.; Ricciardi, S.; Libardi, P.; Burigana, C.; Cuttaia, F.; Mack, K.-H.; Terenzi, L.; Villa, F.; Bonavera, L.; Morgante, G.; Trigilio, C.; Trombetti, T.; Umana, G.

    2011-10-01

    The Simultaneous Medicina-Planck Experiment (SiMPlE) is aimed at observing a selected sample of 263 extragalactic and Galactic sources with the Medicina 32-m single-dish radio telescope in the same epoch as the Planck satellite observations. The data, acquired with frequency coverage down to 5 GHz and combined with Planck data at frequencies above 30 GHz, will constitute a useful reference catalogue of bright sources over the whole Northern hemisphere. Furthermore, source observations performed in different epochs and comparisons with other catalogues will allow the investigation of source variability on different time-scales. In this work, we describe the sample selection, the ongoing data acquisition campaign, the data reduction procedures, the developed tools and the comparison with other data sets. We present 5 and 8.3 GHz data for the SiMPlE Northern sample, consisting of 79 sources with δ ≥ 45° selected from our catalogue and observed during the first 6 months of the project. A first analysis of their spectral behaviour and long-term variability is also presented.

  9. Demographically Adjusted Groups for Equating Test Scores. Research Report. ETS RR-14-30

    ERIC Educational Resources Information Center

    Livingston, Samuel A.

    2014-01-01

    In this study, I investigated 2 procedures intended to create test-taker groups of equal ability by poststratifying on a composite variable created from demographic information. In one procedure, the stratifying variable was the composite variable that best predicted the test score. In the other procedure, the stratifying variable was the…

  10. Hyperspectral imaging for predicting the allicin and soluble solid content of garlic with variable selection algorithms and chemometric models.

    PubMed

    Rahman, Anisur; Faqeerzada, Mohammad A; Cho, Byoung-Kwan

    2018-03-14

    Allicin and soluble solid content (SSC) in garlic are responsible for its pungent flavor and odor. However, current conventional methods, such as high-pressure liquid chromatography and refractometry, have critical drawbacks in that they are time-consuming, labor-intensive and destructive. The present study aimed to predict allicin and SSC in garlic using hyperspectral imaging in combination with variable selection algorithms and calibration models. Hyperspectral images of 100 garlic cloves were acquired covering two spectral ranges, from which the mean spectra of each clove were extracted. The calibration models included partial least squares (PLS) and least squares-support vector machine (LS-SVM) regression, as well as different spectral preprocessing techniques, from which the best-performing spectral preprocessing technique and spectral range were selected. Then, variable selection methods, such as regression coefficients, variable importance in projection (VIP) and the successive projections algorithm (SPA), were evaluated for the selection of effective wavelengths (EWs). Furthermore, PLS and LS-SVM regression methods were applied to quantitatively predict the quality attributes of garlic using the selected EWs. Of the established models, the SPA-LS-SVM model obtained an Rpred2 of 0.90 and standard error of prediction (SEP) of 1.01% for SSC prediction, whereas the VIP-LS-SVM model produced the best result with an Rpred2 of 0.83 and SEP of 0.19 mg g-1 for allicin prediction in the range 1000-1700 nm. Furthermore, chemical images of garlic were developed using the best predictive model to facilitate visualization of the spatial distributions of allicin and SSC. The present study clearly demonstrates that hyperspectral imaging combined with an appropriate chemometrics method can potentially be employed as a fast, non-invasive method to predict allicin and SSC in garlic. © 2018 Society of Chemical Industry.

  11. 48 CFR 715.370 - Alternative source selection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Alternative source selection procedures (Federal Acquisition Regulations System; Agency for International Development; Contracting by Negotiation; Source Selection). The following selection procedures may be used, when...

  12. Retrograde renal hilar dissection and segmental arterial clamping: a simple modification to achieve super-selective robotic partial nephrectomy.

    PubMed

    Greene, Richard N; Sutherland, Douglas E; Tausch, Timothy J; Perez, Deo S

    2014-03-01

    Super-selective vascular control prior to robotic partial nephrectomy (also known as 'zero-ischemia') is a novel surgical technique that promises to reduce warm ischemia time. The technique has been shown to be feasible but adds substantial technical complexity and cost to the procedure. We present a simplified retrograde dissection of the renal hilum to achieve selective vascular control during robotic partial nephrectomy. Consecutive patients with stage 1 solid and complex cystic renal masses underwent robotic partial nephrectomies with selective vascular control using a modification to previously described super-selective robotic partial nephrectomy. In each case, the renal arterial branch supplying the mass and surrounding parenchyma was dissected in a retrograde fashion from the tumor. Intra-renal dissection of the interlobular artery was not performed. Intra-operative immunofluorescence was not utilized as assessment of parenchymal ischemia was documented before partial nephrectomy. Data was prospectively collected in an IRB-approved partial nephrectomy database. Operative variables between patients undergoing super-selective versus standard robotic partial nephrectomy were compared. Super-selective partial nephrectomy with retrograde hilar dissection was successfully completed in five consecutive patients. There were no complications or conversions to traditional partial nephrectomy. All were diagnosed with renal cell carcinoma and surgical margins were all negative. Estimated blood loss, warm ischemia time, operative time and length of stay were all comparable between patients undergoing super-selective and standard robotic partial nephrectomy. Retrograde hilar dissection appears to be a feasible and safe approach to super-selective partial nephrectomy without adding complex renovascular surgical techniques or cost to the procedure.

  13. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID:25265627
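
    While the paper's modulation depth measure is defined within a state-space model, the channel-ranking idea can be sketched with a simpler proxy: score each channel by how much of its variance is explained by the behavioral covariates, then keep the top-scoring subset. The following toy Python fragment (synthetic data, hypothetical names, not the authors' estimator) illustrates that ranking step.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(5)
      T, C = 2000, 40
      kinematics = rng.normal(size=(T, 2))       # intended 2-D velocity, say
      tuned = (rng.random(C) < 0.3)[:, None]     # only ~30% of channels are tuned
      gains = rng.normal(size=(C, 2)) * tuned
      rates = kinematics @ gains.T + rng.normal(size=(T, C))  # channel spike rates

      # Proxy "modulation depth": variance of each channel explained by the
      # behavioral covariates (R^2 of a linear tuning model).
      depth = np.array([
          LinearRegression().fit(kinematics, rates[:, c]).score(kinematics, rates[:, c])
          for c in range(C)
      ])
      subset = np.argsort(-depth)[:10]   # keep the 10 most strongly tuned channels
      print(subset)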

  14. Strategies in identifying individuals in a segregant population of common bean and implications of genotype x environment interaction in the success of selection.

    PubMed

    Mendes, M P; Ramalho, M A P; Abreu, A F B

    2012-04-10

    The objective of this study was to compare the BLUP selection method with different selection strategies in F(2:4) and assess the efficiency of this method on the early choice of the best common bean (Phaseolus vulgaris) lines. Fifty-one F(2:4) progenies were produced from a cross between the CVIII8511 x RP-26 lines. A randomized block design was used with 20 replications and one-plant field plots. Character data on plant architecture and grain yield were obtained and then the sum of the standardized variables was estimated for simultaneous selection of both traits. Analysis was carried out by mixed models (BLUP) and the least squares method to compare different selection strategies, like mass selection, stratified mass selection and between and within progeny selection. The progenies selected by BLUP were assessed in advanced generations, always selecting the greatest and smallest sum of the standardized variables. Analyses by the least squares method and BLUP procedure ranked the progenies in the same way. The coincidence of the individuals identified by BLUP and between and within progeny selection was high and of the greatest magnitude when BLUP was compared with mass selection. Although BLUP is the best estimator of genotypic value, its efficiency in the response to long term selection is not different from any of the other methods, because it is also unable to predict the future effect of the progenies x environments interaction. It was inferred that selection success will always depend on the most accurate possible progeny assessment and using alternatives to reduce the progenies x environments interaction effect.

  15. Patient selection for day case-eligible surgery: identifying those at high risk for major complications.

    PubMed

    Mathis, Michael R; Naughton, Norah N; Shanks, Amy M; Freundlich, Robert E; Pannucci, Christopher J; Chu, Yijia; Haus, Jason; Morris, Michelle; Kheterpal, Sachin

    2013-12-01

    Due to economic pressures and improvements in perioperative care, outpatient surgical procedures have become commonplace. However, risk factors for outpatient surgical morbidity and mortality remain unclear. There are no multicenter clinical data guiding patient selection for outpatient surgery. The authors hypothesize that specific risk factors increase the likelihood of day case-eligible surgical morbidity or mortality. The authors analyzed adults undergoing common day case-eligible surgical procedures by using the American College of Surgeons' National Surgical Quality Improvement Program database from 2005 to 2010. Common day case-eligible surgical procedures were identified as the most common outpatient surgical Current Procedural Terminology codes provided by Blue Cross Blue Shield of Michigan and Medicare publications. Study variables included anthropometric data and relevant medical comorbidities. The primary outcome was morbidity or mortality within 72 h. Intraoperative complications included adverse cardiovascular events; postoperative complications included surgical, anesthetic, and medical adverse events. Of 244,397 surgeries studied, 232 (0.1%) experienced early perioperative morbidity or mortality. Seven independent risk factors were identified while controlling for surgical complexity: overweight body mass index, obese body mass index, chronic obstructive pulmonary disease, history of transient ischemic attack/stroke, hypertension, previous cardiac surgical intervention, and prolonged operative time. The demonstrated low rate of perioperative morbidity and mortality confirms the safety of current day case-eligible surgeries. The authors obtained the first prospectively collected data identifying risk factors for morbidity and mortality with day case-eligible surgery. The results of the study provide new data to advance patient-selection processes for outpatient surgery.

  16. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and are often hampered by local minima problems. In this paper a new, straightforward, automatic procedure based on the response surface method (RSM) is proposed for selecting the best identifiable parameters. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. In this paper, however, RSM is used to select the dominant parameters by evaluating parameter sensitivity in a predefined region. Good results obtained in the calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch show that the proposed procedure is successful and reliable.
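
    The parameter-screening use of RSM described here amounts to fitting a second-order regression surface to simulator outputs sampled over a predefined parameter region and ranking parameters by their fitted effects. A minimal sketch with a toy stand-in simulator follows; it is an illustration of the idea, not the authors' calibration code.

      import numpy as np
      from itertools import combinations_with_replacement

      def simulator(theta):
          """Toy stand-in for an ASM-type model output (e.g., effluent N)."""
          return 3.0 * theta[0] + 0.2 * theta[1] + theta[0] * theta[2] + 0.05 * theta[3]

      rng = np.random.default_rng(6)
      n, p = 200, 4
      samples = rng.uniform(-1, 1, size=(n, p))       # predefined parameter region
      y = np.apply_along_axis(simulator, 1, samples)

      # Fit a second-order response surface: intercept, linear and quadratic terms.
      terms = [np.ones(n)] + [samples[:, i] for i in range(p)] + \
              [samples[:, i] * samples[:, j]
               for i, j in combinations_with_replacement(range(p), 2)]
      beta, *_ = np.linalg.lstsq(np.column_stack(terms), y, rcond=None)
      # Rank parameters by the magnitude of their fitted first-order effects.
      print(np.argsort(-np.abs(beta[1:p + 1])))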

  17. French current management and oncological results of locally recurrent rectal cancer.

    PubMed

    Denost, Q; Faucheron, J L; Lefevre, J H; Panis, Y; Cotte, E; Rouanet, P; Jafari, M; Capdepont, M; Rullier, E

    2015-12-01

    There is significant worldwide variation in practice regarding the criteria for operative intervention and overall management of patients with locally recurrent rectal cancer (LRRC). A survival benefit has been described for patients with clear resection margins after surgery for LRRC, which is seen as an important surgical quality indicator. A prospective French national database was established in 2008 to record procedures undertaken for LRRC. Overall and disease-free survival (OS, DFS) were assessed retrospectively. We report the variability and heterogeneity of LRRC management in France as well as 5-year oncological outcomes. In this national report, 104 questionnaires were completed at 29 French surgical centres with highly variable case loads. Patients had preoperative treatment in 86% of cases. Surgical procedures included APER (36%), LAR (25%), Hartmann's procedure (21%) and pelvic exenteration (15.5%). Four patients had a low sacrectomy (S4/S5). There were no postoperative deaths and overall morbidity was 41%. R0 resection was achieved in 60% of cases, R1 in 29% and R2 in 11%. R0 resection resulted in a 5-year OS of 35%, compared to 12% and 0% for R1 and R2, respectively (OR = 2.04; 95% CI: 1.4-2.98; p < 0.001). OS was similar between R2 and non-resected patients (OR = 1.47; 95% CI: 0.58-3.76; p = 0.418). Our data are in accordance with the literature except for the rate of extended resection procedures. This underlines the selective character of operative indications for LRRC in France, as well as the variability of care and the absence of an optimal clinical pathway for these patients. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Path analysis and multi-criteria decision making: an approach for multivariate model selection and analysis in health.

    PubMed

    Vasconcelos, A G; Almeida, R M; Nobre, F F

    2001-08-01

    This paper introduces an approach that includes non-quantitative factors in the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical goodness-of-fit criteria. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria intended to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained more than 90% of the variation in the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
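
    The dominance-count ranking can be illustrated with a toy sketch (a simplified stand-in for the fuzzy multi-criteria procedure; the experts' actual membership functions are not given in the abstract):

        import numpy as np

        def dominance_rank(scores):
            """scores: (n_models, n_criteria) membership grades in [0, 1].
            A model 'dominates' another when it is >= on every criterion
            and strictly > on at least one."""
            n = len(scores)
            dom = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(n):
                    if i != j and np.all(scores[i] >= scores[j]) \
                              and np.any(scores[i] > scores[j]):
                        dom[i] += 1
            return np.argsort(-dom), dom

        # Five candidate PA models scored on three invented criteria
        grades = np.array([[0.9, 0.8, 0.7], [0.6, 0.9, 0.5], [0.9, 0.9, 0.8],
                           [0.4, 0.5, 0.6], [0.7, 0.6, 0.6]])
        order, counts = dominance_rank(grades)
        print(order, counts)   # model 2 dominates the most alternatives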

  19. Required number of records for ASCE/SEI 7 ground-motion scaling procedure

    USGS Publications Warehouse

    Reyes, Juan C.; Kalkan, Erol

    2011-01-01

    The procedures and criteria in the 2006 IBC (International Council of Building Officials, 2006) and 2007 CBC (International Council of Building Officials, 2007) for the selection and scaling of ground motions for use in nonlinear response history analysis (RHA) of structures are based on ASCE/SEI 7 provisions (ASCE, 2005, 2010). According to ASCE/SEI 7, earthquake records should be selected from events of magnitudes, fault distances, and source mechanisms that comply with the maximum considered earthquake, and then scaled so that the average value of the 5-percent-damped response spectra for the set of scaled records is not less than the design response spectrum over the period range from 0.2Tn to 1.5Tn (where Tn is the fundamental vibration period of the structure). If at least seven ground motions are analyzed, the design values of engineering demand parameters (EDPs) are taken as the average of the EDPs determined from the analyses. If fewer than seven ground motions are analyzed, the design values of EDPs are taken as the maximum values of the EDPs. ASCE/SEI 7 requires a minimum of three ground motions. These limits on the number of records in the ASCE/SEI 7 procedure are based on engineering experience rather than on a comprehensive evaluation. This study statistically examines the required number of records for the ASCE/SEI 7 procedure, such that the scaled records provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic-perfectly-plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI 7 scaling procedure is applied to 480 sets of ground motions. The number of records in these sets varies from three to ten. The records in each set were selected either (i) randomly, (ii) considering their spectral shapes, or (iii) considering their spectral shapes and design spectral-acceleration value, A(Tn). As compared to benchmark (that is, "true") responses from unscaled records using a larger catalog of ground motions, it is demonstrated that the ASCE/SEI 7 scaling procedure is overly conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the EDPs accompanied by reduced record-to-record variability of the responses. Consistency in accuracy and efficiency is achieved only if records are selected on the basis of their spectral shape and A(Tn).
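
    Under one reading of this rule, the group scale factor is the smallest multiplier that makes the average spectrum of the set envelope the design spectrum over 0.2Tn-1.5Tn; a hedged sketch (arrays are placeholders for real response spectra):

        import numpy as np

        def group_scale_factor(periods, record_spectra, design_spectrum, Tn):
            """record_spectra: (n_records, n_periods) 5%-damped spectra."""
            band = (periods >= 0.2 * Tn) & (periods <= 1.5 * Tn)
            mean_spec = record_spectra[:, band].mean(axis=0)
            # The largest deficiency over the band sets the factor
            return float(np.max(design_spectrum[band] / mean_spec))

        periods = np.linspace(0.05, 5.0, 200)
        design = 1.0 / (1.0 + periods)                    # placeholder design spectrum
        records = np.abs(np.random.default_rng(0).normal(0.8, 0.2, (7, 200))) / (1 + periods)
        s = group_scale_factor(periods, records, design, Tn=1.0)
        print(s)   # multiply every record in the set by s before the RHA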

  20. Selection of specific protein binders for pre-defined targets from an optimized library of artificial helicoidal repeat proteins (alphaRep).

    PubMed

    Guellouz, Asma; Valerio-Lepiniec, Marie; Urvoas, Agathe; Chevrel, Anne; Graille, Marc; Fourati-Kammoun, Zaineb; Desmadril, Michel; van Tilbeurgh, Herman; Minard, Philippe

    2013-01-01

    We previously designed a new family of artificial proteins named αRep, based on a subgroup of thermostable helicoidal HEAT-like repeats. We have now assembled a large optimized αRep library. In this library, the side chains at each variable position are not fully randomized but instead encoded by a distribution of codons based on the natural frequency of side chains in the natural repeats family. The library construction is based on a polymerization of micro-genes and therefore results in a distribution of proteins with a variable number of repeats. We improved the library construction process using a "filtration" procedure to retain only fully coding modules, which were recombined to recreate sequence diversity. The final library, named Lib2.1, contains 1.7×10(9) independent clones. Here, we used phage display to select, from the previously described library or from the new library, new specific αRep proteins binding to four different non-related predefined protein targets. Specific binders were selected in each case. The results show that binders of various sizes are selected, including relatively long sequences with up to 7 repeats. ITC-measured affinities vary, with Kd values ranging from the micromolar to the nanomolar range. The formation of complexes is associated with a significant thermal stabilization of the bound target protein. The crystal structures of two complexes between αRep proteins and their cognate targets were solved and show that the new interfaces are established by the variable surfaces of the repeated modules, as well as by the variable N-cap residues. These results suggest that the αRep library is a new and versatile source of tight and specific binding proteins with favorable biophysical properties.

  2. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, conservative geochemical behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and GOF values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end-member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
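
    The unmixing step itself reduces to a small constrained least-squares problem; a minimal sketch (not the authors' exact model), with non-negativity handled by NNLS and the sum-to-one constraint enforced through a heavily weighted extra row:

        import numpy as np
        from scipy.optimize import nnls

        def unmix(source_conc, mixture_conc, w=1e3):
            """source_conc: (n_tracers, n_sources); mixture_conc: (n_tracers,).
            Returns estimated source proportions (non-negative, summing to ~1).
            Tracers should be standardized first so no element dominates."""
            A = np.vstack([source_conc, w * np.ones((1, source_conc.shape[1]))])
            b = np.append(mixture_conc, w)
            props, _ = nnls(A, b)
            return props

        sources = np.array([[10.0, 2.0, 5.0],     # tracer 1 in sources A, B, C
                            [1.0, 8.0, 3.0],      # tracer 2
                            [4.0, 4.0, 9.0]])     # tracer 3
        mixture = sources @ np.array([0.5, 0.3, 0.2])
        print(unmix(sources, mixture).round(3))   # ~[0.5, 0.3, 0.2]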

  3. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, by contrast, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several existing and novel procedures that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source code for all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
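
    One simple scheme of this kind can be sketched with scikit-learn (a generic illustration, not one of the paper's exact procedures): turn random-forest importances into per-feature p-values by rebuilding the null distribution under permuted labels.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def importance_pvalues(X, y, n_perm=100, seed=0):
            rng = np.random.default_rng(seed)
            def importances(labels):
                rf = RandomForestClassifier(n_estimators=200, random_state=0)
                return rf.fit(X, labels).feature_importances_
            observed = importances(y)
            null = np.array([importances(rng.permutation(y))
                             for _ in range(n_perm)])
            # Per feature: fraction of null importances at least as large
            return (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 8))
        y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)   # interacting features
        print(importance_pvalues(X, y, n_perm=50).round(3))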

  4. Evaluation of modal pushover-based scaling of one component of ground motion: Tall buildings

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2012-01-01

    Nonlinear response history analysis (RHA) is now increasingly used for performance-based seismic design of tall buildings. Required for nonlinear RHA is a set of ground motions selected and scaled appropriately so that analysis results are accurate (unbiased) and efficient (having relatively small dispersion). This paper evaluates the accuracy and efficiency of the recently developed modal-pushover-based scaling (MPS) method for scaling ground motions for tall buildings. The procedure presented explicitly considers structural strength and is based on the standard intensity measure (IM) of spectral acceleration in a form convenient for evaluating existing structures or proposed designs for new structures. Based on results presented for two actual buildings (19 and 52 stories, respectively), it is demonstrated that the MPS procedure provides a highly accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. In addition, the MPS procedure is shown to be superior to the scaling procedure specified in the ASCE/SEI 7-05 document.

  5. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function.
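
    The two global indices can be computed without any field data, since the kriging variance depends only on the variogram and the sample geometry; a compact sketch with an assumed exponential variogram (the parameters are placeholders, not those fitted for the Equus Beds):

        import numpy as np

        def gamma(h, sill=1.0, rng_m=500.0):
            # Exponential semivariogram model (assumed for illustration)
            return sill * (1.0 - np.exp(-3.0 * h / rng_m))

        def ok_variance(samples, target):
            """Ordinary-kriging estimation variance at one target location."""
            n = len(samples)
            d = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=2)
            A = np.ones((n + 1, n + 1)); A[:n, :n] = gamma(d); A[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = gamma(np.linalg.norm(samples - target, axis=1))
            lam = np.linalg.solve(A, b)        # weights plus Lagrange multiplier
            return float(lam @ b)              # = sum(lam * gamma0) + mu

        wells = np.random.default_rng(0).uniform(0, 1000, size=(25, 2))
        grid = np.stack(np.meshgrid(np.linspace(0, 1000, 21),
                                    np.linspace(0, 1000, 21)), -1).reshape(-1, 2)
        se = np.sqrt([ok_variance(wells, g) for g in grid])
        print(f"average SE: {se.mean():.3f}, maximum SE: {se.max():.3f}")

    A candidate design is then improved by moving, adding, or removing wells and re-scoring these two indices.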

  6. Expert systems for fault diagnosis in nuclear reactor control

    NASA Astrophysics Data System (ADS)

    Jalel, N. A.; Nicholson, H.

    1990-11-01

    An expert system for accident analysis and fault diagnosis for the Loss Of Fluid Test (LOFT) reactor, a small-scale pressurized water reactor, was developed for a personal computer. The knowledge of the system is represented using a production-rule approach with a backward-chaining inference engine. The database of the system includes simulated dependent state variables of the LOFT reactor model. Another system is designed to assist the operator in choosing the appropriate cooling mode and to diagnose faults in the selected cooling system. The response tree, which links a list of very specific accident sequences to a set of generic emergency procedures that help the operator monitor system status, differentiate between accident sequences and select the correct procedures, is used to build the system knowledge base. Both systems are written in the TURBO PROLOG language and can be run on an IBM PC compatible with 640k RAM, a 40 Mbyte hard disk and color graphics.
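
    The inference style is easy to picture; a toy backward-chaining engine (the LOFT rule base is not public, so the rules and facts below are invented for illustration):

        def backward_chain(goal, rules, facts):
            """rules: {conclusion: [alternative antecedent lists]}.
            Assumes an acyclic rule base; no loop detection in this sketch."""
            if goal in facts:
                return True
            for antecedents in rules.get(goal, []):
                if all(backward_chain(a, rules, facts) for a in antecedents):
                    facts.add(goal)          # cache the derived fact
                    return True
            return False

        rules = {"coolant_fault": [["low_pressure", "high_temperature"]],
                 "low_pressure": [["pump_trip"]]}
        facts = {"pump_trip", "high_temperature"}
        print(backward_chain("coolant_fault", rules, facts))   # True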

  7. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
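
    The three averaging methods at the heart of the first calibration step can be written compactly; a sketch with placeholder end-member values (not the calibrated ones):

        import numpy as np

        def equivalent_k(f_coarse, k_coarse, k_fine, method):
            """Equivalent hydraulic conductivity of a coarse/fine mixture."""
            if method == "arithmetic":    # flow parallel to layering (horizontal)
                return f_coarse * k_coarse + (1 - f_coarse) * k_fine
            if method == "geometric":
                return k_coarse ** f_coarse * k_fine ** (1 - f_coarse)
            if method == "harmonic":      # flow across layering (vertical)
                return 1.0 / (f_coarse / k_coarse + (1 - f_coarse) / k_fine)
            raise ValueError(method)

        for m in ("arithmetic", "geometric", "harmonic"):
            print(m, equivalent_k(0.4, k_coarse=50.0, k_fine=0.01, method=m))

    The harmonic mean is the smallest and the arithmetic the largest, which is consistent with the calibration favoring arithmetic averaging horizontally and geometric or harmonic averaging vertically.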

  8. Analysis of spatio-temporal variability of C-factor derived from remote sensing data

    NASA Astrophysics Data System (ADS)

    Pechanec, Vilem; Benc, Antonin; Purkyt, Jan; Cudlin, Pavel

    2016-04-01

    In some risk areas, water erosion strongly affects agriculture and can threaten inhabitants. In our country, a combination of the USLE and RUSLE models has been used for water erosion assessment (Krása et al., 2013). The role of vegetation cover is characterized by the vegetation protection factor, the so-called C-factor. The value of the C-factor is given by the ratio of soil loss from a plot with arable crops to that from a standard plot kept fallow and regularly tilled after any rain (Janeček et al., 2012). When crop structure and rotation cannot be identified, determining the C-factor over large areas is a problem. In such cases the C-factor can only be determined from the average crop representation. New technologies open possibilities for accelerating and refining the approach. The present-day approach to C-factor determination is based on the analysis of multispectral image data. The red and infrared bands are extracted and used to compute a series of vegetation indices (NDVI, TSAVI). Values acquired for fractional time sections (during the vegetation period) are averaged. At the same time, vegetation index values for a forest and a cleared area are determined, and regression coefficients are computed. The final calculation is done with regression equations expressing the relation between NDVI and C-factor values (De Jong, 1994; Van der Knijff, 1999; Karaburun, 2010). An up-to-date land use layer is used to determine erosion-threatened areas, based on the selection of individual landscape segments in erosion-susceptible categories of land use. By means of Landsat 7 data, the C-factor has been determined for the whole area of the Czech Republic in every month of the year 2014. In a model area in a small watershed, the C-factor has been determined by the conventional (tabular) procedure. The analysis focused on: i) assessing the variability of C-factor values obtained with the conventional procedure and with remote sensing; ii) assessing the variability of values in different months for selected crops, taking into account time variability, growth dynamics and vegetation cover; iii) evaluating the spatial variability of C-factor values for selected crops, taking into account the variability of natural conditions, which markedly influences vegetation fitness. The described methods can be used for planning agricultural activities and for proposing full-scale land replotting. These activities are currently financially supported by the European Structural and Investment Funds.
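
    A sketch of the final calculation, using an exponential NDVI-to-C relation of the Van der Knijff (1999) type cited above (the alpha and beta values are indicative defaults and would be recalibrated from the regression coefficients mentioned in the text):

        import numpy as np

        def ndvi(red, nir):
            return (nir - red) / (nir + red + 1e-9)

        def c_factor(ndvi_val, alpha=2.0, beta=1.0):
            # C ~ exp(-alpha * NDVI / (beta - NDVI)); dense vegetation -> low C
            return np.exp(-alpha * ndvi_val / (beta - ndvi_val))

        red = np.array([0.10, 0.05, 0.20])    # placeholder monthly band averages
        nir = np.array([0.40, 0.50, 0.25])
        print(c_factor(ndvi(red, nir)).round(3))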

  9. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    PubMed

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures.

  10. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time-consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we selected problem instances varying in size from 100 variables/425 clauses to 5000 variables/21,250 clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
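
    The sequential baseline that the parallel procedure reproduces is itself short; a minimal single-threaded annealer for SAT instances (the Generalized Speculative Computation machinery is not sketched here):

        import math
        import random

        def sat_anneal(clauses, n_vars, t0=2.0, cooling=0.9999, steps=200000, seed=0):
            """clauses: tuples of nonzero ints; literal v > 0 means var v is true."""
            rng = random.Random(seed)
            assign = [False] + [rng.random() < 0.5 for _ in range(n_vars)]
            unsat = lambda: sum(not any(assign[abs(l)] == (l > 0) for l in c)
                                for c in clauses)
            cost, temp = unsat(), t0
            for _ in range(steps):
                if cost == 0:
                    break
                v = rng.randint(1, n_vars)
                assign[v] = not assign[v]              # propose a single flip
                new = unsat()
                if new <= cost or rng.random() < math.exp((cost - new) / temp):
                    cost = new                         # Metropolis acceptance
                else:
                    assign[v] = not assign[v]          # reject: undo the flip
                temp *= cooling
            return assign, cost                        # cost == 0: satisfied

        clauses = [(1, -2, 3), (-1, 2, 3), (1, 2, -3), (-1, -2, -3)]
        print(sat_anneal(clauses, n_vars=3)[1])        # 0 -> all clauses satisfied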

  11. Physical-property, water-quality, plankton, and bottom-material data for Devils Lake and East Devils Lake, North Dakota, September 1988 through October 1990

    USGS Publications Warehouse

    Sando, Steven K.; Sether, Bradley A.

    1993-01-01

    Physical properties were measured and water-quality, plankton, and bottom-material samples were collected at 10 sites in Devils Lake and East Devils Lake during September 1988 through October 1990 to study water-quality variability and water-quality and plankton relations in Devils Lake and East Devils Lake. Physical properties measured include specific conductance, pH, water temperature, dissolved-oxygen concentration, water transparency, and light transmission. Water-quality samples were analyzed for concentrations of major ions, selected nutrients, and selected trace elements. Plankton samples were examined for identification and enumeration of phytoplankton and zooplankton species, and bottom-material samples were analyzed for concentrations of selected nutrients. Data-collection procedures are discussed and the data are presented in tabular form.

  12. South African medical schools: Current state of selection criteria and medical students' demographic profile.

    PubMed

    van der Merwe, L J; van Zyl, G J; St Clair Gibson, A; Viljoen, M; Iputo, J E; Mammen, M; Chitha, W; Perez, A M; Hartman, N; Fonn, S; Green-Thompson, L; Ayo-Ysuf, O A; Botha, G C; Manning, D; Botha, S J; Hift, R; Retief, P; van Heerden, B B; Volmink, J

    2015-12-16

    Selection of medical students at South African (SA) medical schools must promote equitable and fair access to students from all population groups, while ensuring optimal student throughput and success, and training future healthcare practitioners who will fulfil the needs of the local society. In keeping with international practices, a variety of academic and non-academic measures are used to select applicants for medical training programmes in SA medical schools. To provide an overview of the selection procedures used by all eight medical schools in SA, and the student demographics (race and gender) at these medical schools, and to determine to what extent collective practices are achieving the goals of student diversity and inclusivity. A retrospective, quantitative, descriptive study design was used. All eight medical schools in SA provided information regarding selection criteria, selection procedures, and student demographics (race and gender). Descriptive analysis of data was done by calculating frequencies and percentages of the variables measured. Medical schools in SA make use of academic and non-academic criteria in their selection processes. The latter include indices of socioeconomic disadvantage. Most undergraduate medical students in SA are black (38.7%), followed by white (33.0%), coloured (13.4%) and Indian/Asian (13.6%). The majority of students are female (62.2%). The number of black students is still proportionately lower than in the general population, while other groups are overrepresented. Selection policies for undergraduate medical programmes aimed at redress should be continued and further refined, along with the provision of support to ensure student success.

  13. Selecting minimum dataset soil variables using PLSR as a regressive multivariate method

    NASA Astrophysics Data System (ADS)

    Stellacci, Anna Maria; Armenise, Elena; Castellini, Mirko; Rossi, Roberta; Vitti, Carolina; Leogrande, Rita; De Benedetto, Daniela; Ferrara, Rossana M.; Vivaldi, Gaetano A.

    2017-04-01

    Long-term field experiments and science-based tools that characterize soil status (namely the soil quality indices, SQIs) assume a strategic role in assessing the effect of agronomic techniques and thus in improving soil management, especially in marginal environments. Selecting key soil variables able to best represent soil status is a critical step in the calculation of SQIs. Current studies show the effectiveness of statistical methods for variable selection in extracting relevant information from multivariate datasets. Principal component analysis (PCA) has mainly been used; however, supervised multivariate methods and regressive techniques are progressively being evaluated (Armenise et al., 2013; de Paul Obade et al., 2016; Pulido Moncada et al., 2014). The present study explores the effectiveness of partial least square regression (PLSR) in selecting critical soil variables, using a dataset comparing conventional tillage and sod-seeding on durum wheat. The results were compared to those obtained using PCA and stepwise discriminant analysis (SDA). The soil data were derived from a long-term field experiment in Southern Italy. On samples collected in April 2015, the following set of variables was quantified: (i) chemical: total organic carbon and nitrogen (TOC and TN), alkali-extractable C (TEC and humic substances - HA-FA), water-extractable N and organic C (WEN and WEOC), Olsen extractable P, exchangeable cations, pH and EC; (ii) physical: texture, dry bulk density (BD), macroporosity (Pmac), air capacity (AC), and relative field capacity (RFC); (iii) biological: carbon of the microbial biomass quantified with the fumigation-extraction method. PCA and SDA were previously applied to the multivariate dataset (Stellacci et al., 2016). PLSR was carried out on mean-centered and variance-scaled data of predictors (soil variables) and response (wheat yield) variables using the PLS procedure of SAS/STAT. In addition, variable importance for projection (VIP) statistics was used to quantitatively assess the predictors most relevant for response variable estimation and then for variable selection (Andersen and Bro, 2010). PCA and SDA returned TOC and RFC as influential variables both on the sets of chemical and physical data analyzed separately and on the whole dataset (Stellacci et al., 2016). Highly weighted variables in PCA were also TEC, followed by K, and AC, followed by Pmac and BD, in the first PC (41.2% of total variance); Olsen P and HA-FA in the second PC (12.6%); and Ca in the third component (10.6%). Variables enabling maximum discrimination among treatments for SDA were WEOC, on the whole dataset, and humic substances, followed by Olsen P, EC and clay, in the separate data analyses. The highest PLS-VIP statistics were recorded for Olsen P and Pmac, followed by TOC, TEC, pH and Mg for the chemical variables, and clay, RFC and AC for the physical variables. The results show that different methods may provide different rankings of the selected variables, and that the presence of a response variable, in regressive techniques, may affect variable selection. Further investigation with different response variables and with multi-year datasets would allow better definition of the advantages and limits of single or combined approaches.
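
    A hedged translation of the PLSR-plus-VIP step into Python rather than SAS (the VIP formula is the standard chemometrics one; X and y below are placeholders for the soil variables and wheat yield):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def vip(pls):
            """Variable importance for projection from a fitted PLSRegression."""
            t, w, q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
            p = w.shape[0]
            ss = (q ** 2).sum(axis=0) * (t ** 2).sum(axis=0)  # y-variance per component
            wn = w / np.linalg.norm(w, axis=0)
            return np.sqrt(p * ((wn ** 2) @ ss) / ss.sum())

        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 10))                      # placeholder soil predictors
        y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=30)
        pls = PLSRegression(n_components=2).fit(X, y)      # autoscales by default
        scores = vip(pls)
        print(np.where(scores > 1.0)[0])   # VIP > 1 is a common selection cut-off
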
    Acknowledgment: The work was supported by the projects "BIOTILLAGE, approcci innovative per il miglioramento delle performances ambientali e produttive dei sistemi cerealicoli no-tillage", financed by PSR-Basilicata 2007-2013, and "DESERT, Low-cost water desalination and sensor technology compact module", financed by ERANET-WATERWORKS 2014.
    References:
    Andersen C.M. and Bro R., 2010. Variable selection in regression - a tutorial. Journal of Chemometrics, 24:728-737.
    Armenise et al., 2013. Developing a soil quality index to compare soil fitness for agricultural use under different managements in the Mediterranean environment. Soil and Tillage Research, 130:91-98.
    de Paul Obade et al., 2016. A standardized soil quality index for diverse field conditions. Science of the Total Environment, 541:424-434.
    Pulido Moncada et al., 2014. Data-driven analysis of soil quality indicators using limited data. Geoderma, 235:271-278.
    Stellacci et al., 2016. Comparison of different multivariate methods to select key soil variables for soil quality indices computation. XLV Congress of the Italian Society of Agronomy (SIA), Sassari, 20-22 September 2016.

  14. Documentation for assessment of modal pushover-based scaling procedure for nonlinear response history analysis of "ordinary standard" bridges

    USGS Publications Warehouse

    Kalkan, Erol; Kwong, Neal S.

    2010-01-01

    The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate the seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground-motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case for the central United States), or when high-intensity records are needed (as is the case for San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records, such that the scaled records provide accurate and efficient estimates of 'true' median structural responses. The adjective 'accurate' refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective 'efficient' refers to the record-to-record variability of responses. Herein, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing 'ordinary standard' bridges typical of reinforced-concrete bridge construction in California: the single-bent overpass, the multi-span bridge, the curved bridge, and the skew bridge. As compared to benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provides an accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. Thus, the MPS procedure is a useful tool for scaling ground motions as input to nonlinear RHA of 'ordinary standard' bridges.

  15. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
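
    The Copas-like model itself is not reproduced here, but the E/M alternation follows the familiar latent-variable pattern; a generic illustration on a two-component Gaussian mixture (a stand-in, not the paper's model):

        import numpy as np
        from scipy.stats import norm

        def em_mixture(x, iters=200):
            mu = np.array([x.min(), x.max()])
            sd = np.array([x.std(), x.std()])
            pi = np.array([0.5, 0.5])
            for _ in range(iters):
                # E-step: posterior responsibility of each component per point
                dens = pi * norm.pdf(x[:, None], mu, sd)
                r = dens / dens.sum(axis=1, keepdims=True)
                # M-step: maximize the expected complete-data log-likelihood
                nk = r.sum(axis=0)
                pi = nk / len(x)
                mu = (r * x[:, None]).sum(axis=0) / nk
                sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
            return pi, mu, sd

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
        print(em_mixture(x))   # weights near (0.6, 0.4), means near (0, 4)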

  16. Construction of quality-assured infant feeding process of care data repositories: Construction of the perinatal repository (Part 2).

    PubMed

    García-de-León-Chocano, Ricardo; Muñoz-Soler, Verónica; Sáez, Carlos; García-de-León-González, Ricardo; García-Gómez, Juan M

    2016-04-01

    This is the second in a series of two papers regarding the construction of data quality (DQ) assured repositories, based on population data from Electronic Health Records (EHR), for the reuse of information on infant feeding from birth until the age of two. This second paper describes the application of the computational process of constructing the first quality-assured repository for the reuse of information on infant feeding in the perinatal period, with the aim of studying relevant questions from the Baby Friendly Hospital Initiative (BFHI) and monitoring its deployment in our hospital. The construction of the repository was carried out using 13 semi-automated procedures to assess, recover or discard clinical data. The initial information consisted of perinatal forms from EHR related to 2048 births (Facts of Study, FoS) between 2009 and 2011, with a total of 433,308 observations of 223 variables. DQ was measured before and after the procedures using metrics related to eight quality dimensions: predictive value, correctness, duplication, consistency, completeness, contextualization, temporal stability, and spatial stability. Once the predictive variables were selected and DQ was assured, the final repository consisted of 1925 births, 107,529 observations and 73 quality-assured variables. The discarded observations mainly correspond to observations of non-predictive variables (52.90%) and the impact of the de-duplication process (20.58%) with respect to the total input data. Seven out of thirteen procedures achieved 100% valid births, observations and variables. Moreover, 89% of births and ~98% of observations were consistent according to the experts' criteria. A multidisciplinary approach, along with the quantification of DQ, has allowed us to construct the first repository about infant feeding in the perinatal period based on EHR population data.
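
    Two of the eight dimensions (completeness and duplication) are easy to illustrate with a toy pandas check; the thirteen actual procedures and the repository schema are not reproduced here:

        import pandas as pd

        def dq_summary(df, key_cols):
            """Toy data-quality indicators for an EHR-style table."""
            return {
                "completeness": 1.0 - df.isna().mean().mean(),  # non-missing cells
                "duplication": df.duplicated(subset=key_cols).mean(),
                "records": len(df),
            }

        births = pd.DataFrame({"birth_id": [1, 1, 2, 3],
                               "feeding": ["breast", "breast", None, "formula"]})
        print(dq_summary(births, key_cols=["birth_id"]))
        # {'completeness': 0.875, 'duplication': 0.25, 'records': 4}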

  17. Toothbrushing procedure in schoolchildren with no previous formal instruction: variables associated to dental biofilm removal.

    PubMed

    Rossi, Glenda N; Sorazabal, Ana L; Salgado, Pablo A; Squassi, Aldo F; Klemonskis, Graciela L

    2016-04-01

    The aim of this study was to establish the association between features of the brushing procedure performed by schoolchildren without previous formal training and the effectiveness of biofilm removal. Out of a population of 8900 6- and 7-year-old schoolchildren in Buenos Aires City, 600 children were selected from schools located in homogeneous risk areas. Informed consent was requested from parents or guardians and formal assent was obtained from the children themselves. The final sample consisted of 316 subjects. The following toothbrushing variables were analyzed: toothbrush gripping, orientation of the active part of the bristles with respect to the tooth, type of movement applied, brushing both jaws together or separately, inclusion of all 6 sextants, and duration of brushing. The level of dental biofilm after brushing was determined by O'Leary's index, with an acceptable cut-off point of 20%. Four calibrated dentists performed observations and clinical examinations. Frequency distribution, central tendency and dispersion measures were calculated. Cluster analyses were performed; proportions of variables for each cluster were compared with Bonferroni's correction and ORs were obtained. The most frequent categories were: palm gripping (71.51%); perpendicular orientation (85.8%); horizontal movement (95.6%); separate addressing of jaws (68%); and inclusion of all 6 sextants (50.6%). Mean duration of brushing was 48.78 ± 27.36 seconds. 42.7% of the children achieved an acceptable biofilm level. The cluster with the highest proportion of subjects with acceptable post-brushing biofilm levels (p<0.05) differed significantly from the rest for the variable "inclusion of all 6 sextants in the brushing procedure"; the OR was 2.538 (95% CI: 1.603-4.017). Inclusion of all six sextants could be a determinant variable for the removal of biofilm by brushing in schoolchildren, and should be systematized as a component of oral hygiene education.

  18. A vertical-energy-thresholding procedure for data reduction with multiple complex curves.

    PubMed

    Jung, Uk; Jeong, Myong K; Lu, Jye-Chyi

    2006-10-01

    Due to the development of sensing and computer technology, measurements of many process variables are available in current manufacturing processes. It is very challenging, however, to process a large amount of information in a limited time in order to make decisions about the health of the processes and products. This paper develops a "preprocessing" procedure for multiple sets of complicated functional data in order to reduce the data size for supporting timely decision analyses. The data type studied has been used for fault detection, root-cause analysis, and quality improvement in such engineering applications as automobile and semiconductor manufacturing and nanomachining processes. The proposed vertical-energy-thresholding (VET) procedure balances the reconstruction error against data-reduction efficiency so that it is effective in capturing key patterns in the multiple data signals. The selected wavelet coefficients are treated as the "reduced-size" data in subsequent analyses for decision making. This enhances the ability of the existing statistical and machine-learning procedures to handle high-dimensional functional data. A few real-life examples demonstrate the effectiveness of our proposed procedure compared to several ad hoc techniques extended from single-curve-based data modeling and denoising procedures.
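
    The single-curve building block of such a reduction can be sketched with PyWavelets (a hedged illustration; the paper's vertical thresholding across multiple curves is not reproduced): keep the smallest set of wavelet coefficients holding a target fraction of the total energy and zero the rest.

        import numpy as np
        import pywt

        def energy_reduce(signal, wavelet="db4", keep=0.99):
            coeffs = pywt.wavedec(signal, wavelet)
            arr, slices = pywt.coeffs_to_array(coeffs)
            order = np.argsort(arr ** 2)[::-1]          # coefficients by energy
            cum = np.cumsum(arr[order] ** 2) / np.sum(arr ** 2)
            k = int(np.searchsorted(cum, keep)) + 1     # smallest set reaching keep
            reduced = np.zeros_like(arr)
            reduced[order[:k]] = arr[order[:k]]
            rec = pywt.waverec(pywt.array_to_coeffs(reduced, slices,
                                                    output_format="wavedec"), wavelet)
            return rec[:len(signal)], k                 # reconstruction, reduced size

        t = np.linspace(0, 1, 1024)
        sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).normal(size=1024)
        rec, k = energy_reduce(sig)
        print(k, float(np.abs(sig - rec).max()))        # few coefficients, small error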

  19. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    NASA Astrophysics Data System (ADS)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.
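
    A minimal bootstrap particle filter shows the estimation pattern (the growth-stage state model and the SAR likelihood below are placeholders, not the authors' calibrated model):

        import numpy as np

        def particle_filter(obs, n=1000, q=0.05, r=0.3, seed=0):
            """1-D bootstrap filter; the state is a continuous growth indicator."""
            rng = np.random.default_rng(seed)
            x = rng.uniform(0.0, 1.0, n)                 # initial particle cloud
            estimates = []
            for z in obs:
                x = x + 0.1 + rng.normal(0.0, q, n)      # predict: monotone growth
                w = np.exp(-0.5 * ((z - x) / r) ** 2)    # weight: Gaussian likelihood
                w /= w.sum()
                estimates.append(float(w @ x))           # posterior-mean estimate
                x = x[rng.choice(n, size=n, p=w)]        # multinomial resampling
            return np.array(estimates)

        true = 0.1 * np.arange(1, 11)                    # hypothetical growth track
        obs = true + np.random.default_rng(1).normal(0, 0.3, 10)
        print(particle_filter(obs).round(2))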

  20. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
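
    The mechanism behind such bias is easy to demonstrate for classical-type error (a generic illustration, not the paper's LUR simulation design): regressing the outcome on an error-prone exposure attenuates the slope by the reliability ratio var(x)/var(w).

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 5000, 0.5
        x = rng.normal(size=n)                       # true exposure
        w = x + rng.normal(size=n)                   # error-prone exposure estimate
        y = beta * x + rng.normal(size=n)            # health outcome
        slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(a, ddof=1)
        print(slope(x, y))   # ~0.50, unbiased
        print(slope(w, y))   # ~0.25, attenuated: var(x)/var(w) = 1/2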

  1. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    This paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task was to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows both single and cyclic simulations to be performed, with a specified step for changing the values of selected process variables. The architecture of the application is presented. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are available with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
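
    The optimization layer can be sketched with SciPy's implementations of two of the listed methods (the EPANET-backed objective is replaced here by a placeholder cost function; the remote calls to the simulation model are not shown):

        import numpy as np
        from scipy.optimize import minimize

        def cost(setpoints):
            """Placeholder for an EPANET-evaluated objective, e.g. pumping
            energy plus penalties for pressure violations."""
            target = np.array([0.6, 0.4])
            return float(np.sum((setpoints - target) ** 2) + 0.1 * setpoints.sum())

        x0 = np.array([0.5, 0.5])
        bounds = [(0.0, 1.0), (0.0, 1.0)]
        slsqp = minimize(cost, x0, method="SLSQP", bounds=bounds,
                         constraints=[{"type": "ineq",
                                       "fun": lambda x: x.sum() - 0.8}])
        powell = minimize(cost, x0, method="Powell", bounds=bounds)
        print(slsqp.x, powell.x)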

  2. Evaluation of redundancy analysis to identify signatures of local adaptation.

    PubMed

    Capblancq, Thibaut; Luu, Keurcien; Blum, Michael G B; Bazin, Eric

    2018-05-26

    Ordination is a common tool in ecology that aims at representing complex biological information in a reduced space. In landscape genetics, ordination methods such as principal component analysis (PCA) have been used to detect adaptive variation based on genomic data. Taking advantage of environmental data in addition to genotype data, redundancy analysis (RDA) is another ordination approach that is useful for detecting adaptive variation. This paper proposes a test statistic based on RDA to search for loci under selection. We compare redundancy analysis to pcadapt, which is a nonconstrained ordination method, and to a latent factor mixed model (LFMM), which is a univariate genotype-environment association method. Individual-based simulations identify evolutionary scenarios where RDA genome scans have greater statistical power than genome scans based on PCA. By constraining the analysis with environmental variables, RDA performs better than PCA in identifying adaptive variation when selection gradients are weakly correlated with population structure. Additionally, we show that while RDA and LFMM have similar power to identify genetic markers associated with environmental variables, the RDA-based procedure has the advantage of identifying the main selective gradients as a combination of environmental variables. To give a concrete illustration of RDA in population genomics, we apply this method to the detection of outliers and selective gradients in an SNP data set of Populus trichocarpa (Geraldes et al., 2013). The RDA-based approach identifies the main selective gradient contrasting southern and coastal populations with northern and continental populations on the northwestern American coast.
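
    A schematic of an RDA-based scan (hedged: the authors' actual test statistic may differ in detail): regress the genotype matrix on the environmental predictors, extract the leading constrained axes from the fitted values, and flag loci with extreme loadings.

        import numpy as np

        def rda_outlier_scores(G, E, axes=2):
            """G: (n_ind, n_loci) centered genotypes; E: (n_ind, n_env)
            standardized environmental predictors."""
            B, *_ = np.linalg.lstsq(E, G, rcond=None)   # multivariate regression
            U, s, Vt = np.linalg.svd(E @ B, full_matrices=False)
            load = Vt[:axes].T * s[:axes]               # locus loadings on RDA axes
            z = (load - load.mean(0)) / load.std(0)
            return (z ** 2).sum(axis=1)                 # large = candidate outlier

        rng = np.random.default_rng(0)
        E = rng.normal(size=(100, 2))
        G = rng.normal(size=(100, 500))
        G[:, 0] += 1.5 * E[:, 0]                        # one locus tracks the gradient
        scores = rda_outlier_scores(G - G.mean(0), E)
        print(scores.argmax())                          # 0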

  3. Interpretation of tropospheric ozone variability in data with different vertical and temporal resolution

    NASA Astrophysics Data System (ADS)

    Petropavlovskikh, I. V.; Disterhoft, P.; Johnson, B. J.; Rieder, H. E.; Manney, G. L.; Daffer, W.

    2012-12-01

    This work attributes tropospheric ozone variability derived from ground-based Dobson and Brewer Umkehr measurements and from ozonesonde data to local sources and transport. It assesses the capabilities and limitations of both types of measurement, which are often used to analyze long- and short-term variability in tropospheric ozone time series. We will address the natural and instrument-related contributions to the variability found in both Umkehr and sonde data. Validation of Umkehr methods is often done by intercomparison against independent ozone measuring techniques such as ozone sounding. We will use ozonesonde vertical profiles, both original and smoothed with averaging kernels (AK), to assess interannual ozone variability over Boulder, CO. We will discuss possible reasons for differences between the ozone measuring techniques and their effects on the derived ozone trends. In addition to standard evaluation techniques, we utilize an STL decomposition method to address temporal variability and trends in the Boulder Umkehr data. Further, we apply a statistical modeling approach to the ozone data set to attribute ozone variability to individual driving forces associated with natural and anthropogenic causes. To this aim we follow earlier work applying a backward selection method (i.e., a stepwise elimination procedure over a set of 44 explanatory variables) to determine the explanatory variables that contribute most significantly to the observed variability. We will also present some results associated with the completeness (sampling rate) of the existing data sets. We will use MERRA (Modern-Era Retrospective analysis for Research and Applications) re-analysis results selected for the Boulder location as a transfer function in understanding the effects that temporal sampling and vertical resolution have on trend and ozone variability analyses. Analyzing intra-annual variability in ozone measurements over Boulder, CO, in relation to the upper tropospheric subtropical and polar jets, we will address stratospheric and tropospheric intrusions in the middle-latitude tropospheric ozone field.
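
    For reference, the trend/seasonal/residual split of the STL step can be reproduced with statsmodels (synthetic monthly data shown, not the Boulder Umkehr record):

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import STL

        idx = pd.date_range("1995-01-01", periods=240, freq="MS")  # 20 y, monthly
        ozone = pd.Series(30 + 0.01 * np.arange(240)               # slow trend
                          + 3 * np.sin(2 * np.pi * np.arange(240) / 12)  # cycle
                          + np.random.default_rng(0).normal(0, 1, 240),
                          index=idx)
        res = STL(ozone, period=12, robust=True).fit()
        print(res.trend.head(3))
        print(res.seasonal.head(3))
        print(res.resid.head(3))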

  4. Characterizing hospital inpatients: the importance of demographics and attitudes.

    PubMed

    Danko, W D; Janakiramanan, B; Stanley, T J

    1988-01-01

    To compete effectively, hospital administrators must understand more clearly the inpatients who are involved in hospital-choice decisions. To this end, a methodology is presented to measure and assess the importance of inpatients' personal attributes in predicting hospital selection. Empirical results show that demographic characteristics are poor segmentation variables, whereas attitudes are useful ones for delineating differences between two particular hospitals' inpatients. More generally, the survey method and statistical procedures outlined are applicable (with slight modification) to markets with a greater number of competitors.

  5. Queue position in the endoscopic schedule impacts effectiveness of colonoscopy.

    PubMed

    Lee, Alexander; Iskander, John M; Gupta, Nitin; Borg, Brian B; Zuckerman, Gary; Banerjee, Bhaskar; Gyawali, C Prakash

    2011-08-01

    Endoscopist fatigue potentially impacts colonoscopy. Fatigue is difficult to quantitate, but polyp detection rates between non-fatigued and fatigued time periods could represent a surrogate marker. We assessed whether timing variables impacted polyp detection rates at a busy tertiary care endoscopy suite. Consecutive patients undergoing colonoscopy were retrospectively identified. Indications, clinical demographics, and pre-procedural and procedural variables were extracted from chart review; colonoscopy findings were determined from the procedure reports. Three separate timing variables were assessed as surrogate markers for endoscopist fatigue: morning vs. afternoon procedures, start times throughout the day, and queue position, a unique variable that takes into account the number of procedures performed before the colonoscopy of interest. Univariate and multivariate analyses were performed to determine whether timing variables and other clinical, pre-procedural, and procedural variables predicted polyp detection. During the 4-month study period, 1,083 outpatient colonoscopy procedures (57.5±0.5 years, 59.5% female) were identified, performed by 28 endoscopists (mean 38.7 procedures/endoscopist), with a mean polyp detection rate of 0.851/colonoscopy. At least one adenoma was detected in 297 procedures (27.4%). A 12.4% reduction in mean detected polyps was observed between morning and afternoon procedures (0.90±0.06 vs. 0.76±0.06, P=0.15). Using start time on a continuous scale, however, each elapsed hour in the day was associated with a 4.6% reduction in polyp detection (P=0.005). When queue position was assessed, a 5.4% reduction in polyp detection was noted with each increase in queue position (P=0.016). These results remained significant when controlled for each individual endoscopist. Polyp detection rates decline as time passes during an endoscopist's schedule, potentially from endoscopist fatigue. Queue position may be a novel surrogate measure for operator fatigue.

  6. 29 CFR 1607.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    29 CFR Part 1607, Labor Regulations Relating to Labor (Continued), EQUAL EMPLOYMENT OPPORTUNITY COMMISSION, UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978), General Principles, § 1607.3 Discrimination defined: Relationship between use of selection procedures and discrimination.

  7. The Heterogeneity in Retrieved Relations between the Personality Trait ‘Harm Avoidance’ and Gray Matter Volumes Due to Variations in the VBM and ROI Labeling Processing Settings

    PubMed Central

    Van Schuerbeek, Peter; Baeken, Chris; De Mey, Johan

    2016-01-01

    Concerns are rising about the large variability in reported correlations between gray matter morphology and affective personality traits such as 'Harm Avoidance' (HA). A recent review study (Mincic, 2015) stipulated that this variability could stem from methodological differences between studies. In order to achieve more robust results by standardizing the data processing procedure, as a first step, we repeatedly analyzed data from healthy females while changing the processing settings (voxel-based morphometry (VBM) or region-of-interest (ROI) labeling, smoothing filter width, nuisance parameters included in the regression model, brain atlas, and multiple comparisons correction method). The heterogeneity in the obtained results clearly illustrates the dependence of the study outcome on the chosen analysis settings. Based on our results and the existing literature, we recommend the use of VBM over ROI labeling for whole-brain analyses, with a small or intermediate smoothing filter (5-8 mm) and a model variable selection step included in the processing procedure. Additionally, we recommend that ROI labeling only be used in combination with a clear hypothesis, and authors are encouraged to report their results uncorrected for multiple comparisons as supplementary material to aid review studies. PMID:27096608

  8. Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.

    PubMed

    McShane, L M; Clark, L C; Combs, G F; Turnbull, B W

    1991-06-01

    Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids the ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by high-performance liquid chromatography.
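
    As a minimal sketch of the variance-components idea (hypothetical duplicate assay values, and only a total coefficient of variation rather than the paper's full "maximum percent error" summary), a one-way random-effects ANOVA splits total variability into between-run and within-run components:

    ```python
    import numpy as np

    # Hypothetical duplicate results for one control sample across four runs
    runs = np.array([[11.2, 11.5], [10.8, 11.0], [11.9, 11.6], [11.1, 11.3]])
    k, n = runs.shape  # k runs, n replicates per run

    grand = runs.mean()
    ms_between = n * ((runs.mean(axis=1) - grand) ** 2).sum() / (k - 1)
    ms_within = ((runs - runs.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))

    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)  # ANOVA estimator
    total_sd = np.sqrt(var_within + var_between)
    print(f"total CV = {100 * total_sd / grand:.1f}%")
    ```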

  9. Age- and sex-specific reference values of a test of neck muscle endurance.

    PubMed

    Peolsson, Anneli; Almkvist, Cecilia; Dahlberg, Camilla; Lindqvist, Sara; Pettersson, Susanne

    2007-01-01

    This study evaluates age- and sex-specific reference values for neck muscle endurance (NME). In this cross-sectional study, 116 randomly selected, healthy volunteers (ages 25-64 years), stratified according to age and gender, participated. Dorsal and ventral NME was measured in seconds until exhaustion in a lying position. A weight of 4 kg for men or 2 kg for women was used in the dorsal procedure. The ventral procedure was performed without external load. Background and physical activity data were obtained and used in the analysis of NME performance. Mean values for dorsal and ventral NME were about 7 and 2.5 minutes for men and 8.5 and 0.5 minutes for women, respectively. The cutoff values for subnormal dorsal and ventral NME were 157 and 56 seconds for men and 173 and 23 seconds for women, respectively. Women's NME was 122% of men's NME in the dorsal (P = .17) and 24% of men's NME in the ventral (P < .0001) procedure. There were no significant differences among age groups. In multiple regression analysis, physical activity explained 4% of the variability in the performance of the dorsal NME, and sex explained 37% of the variability in the performance of the ventral NME. The reference values and cutoff points obtained could be used in clinical practice to identify patients with subnormal NME. Sex is an important consideration when using both the test procedure and the reference values.

  10. The no-show patient in the model family practice unit.

    PubMed

    Dervin, J V; Stone, D L; Beck, C H

    1978-12-01

    Appointment breaking by patients causes problems for the physician's office. Patients who neither keep nor cancel their appointments are often referred to as "no shows." Twenty variables were identified as potential predictors of no-show behavior. These predictors were applied to 291 Family Practice Center patients during a one-month study in April 1977. A discriminant function and a multiple regression procedure were utilized to ascertain the predictive value of the selected variables. Predictive accuracy of the variables was 67.4 percent, compared to the presently utilized constant-predictor technique, which is 73 percent accurate. Modification of appointment schedules based upon use of the variables studied as predictors of show/no-show behavior does not appear to be an effective strategy in the Family Practice Center of the Community Hospital of Sonoma County, Santa Rosa, due to the high proportion of patients who do, in fact, show. In clinics with lower show rates, the technique may prove to be an effective strategy.

  11. Developing a Model for Forecasting Road Traffic Accident (RTA) Fatalities in Yemen

    NASA Astrophysics Data System (ADS)

    Karim, Fareed M. A.; Abdo Saleh, Ali; Taijoobux, Aref; Ševrović, Marko

    2017-12-01

    The aim of this paper is to develop a model for forecasting RTA fatalities in Yemen. Yearly fatalities were modeled as the dependent variable, while the candidate independent variables included population, number of vehicles, GNP, GDP and real GDP per capita. All of these variables were found to be highly correlated (r ≈ 0.9); in order to avoid multicollinearity in the model, the single variable with the highest r value (real GDP per capita) was selected. A simple regression model was developed; the fit was very good (R2 = 0.916), but the residuals were serially correlated. The Prais-Winsten procedure was used to overcome this violation of the regression assumptions. Data for the 20-year period 1991-2010 were analyzed to build the model, which was validated using data for the years 2011-2013; the historical fit for the period 1991-2011 was very good, and the validation for 2011-2013 proved accurate.
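
    statsmodels does not expose Prais-Winsten under that name, but its GLSAR class performs the closely related iterative AR(1) feasible-GLS correction (Cochrane-Orcutt style). A minimal sketch on synthetic data, not the Yemen series:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    rng = np.random.default_rng(0)
    gdp = np.linspace(400, 900, 20) + rng.normal(0, 20, 20)  # hypothetical GDP per capita
    errors = np.cumsum(rng.normal(0, 30, 20))                # serially correlated noise
    fatalities = 200 + 2.5 * gdp + errors

    X = sm.add_constant(gdp)
    ols = sm.OLS(fatalities, X).fit()
    print("Durbin-Watson:", durbin_watson(ols.resid))        # well below 2 -> autocorrelation

    gls = sm.GLSAR(fatalities, X, rho=1).iterative_fit(maxiter=10)
    print("AR(1)-corrected slope:", gls.params[1])
    ```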

  12. Memory bias for threatening information in anxiety and anxiety disorders: a meta-analytic review.

    PubMed

    Mitte, Kristin

    2008-11-01

    Although some theories suggest that anxious individuals selectively remember threatening stimuli, findings remain contradictory despite a considerable amount of research. A quantitative integration of 165 studies with 9,046 participants (clinical and nonclinical samples) examined whether a memory bias exists and which moderator variables influence its magnitude. Implicit memory bias was investigated in lexical decision/stimulus identification and word-stem completion paradigms; explicit memory bias was investigated in recognition and recall paradigms. Overall, effect sizes showed no significant impact of anxiety on implicit memory and recognition. Analyses indicated a memory bias for recall, whose magnitude depended on experimental study procedures like the encoding procedure or retention interval. Anxiety influenced recollection of previous experiences; anxious individuals favored threat-related information. Across all paradigms, clinical status was not significantly linked to effect sizes, indicating no qualitative difference in information processing between anxiety patients and high-anxious persons. The large discrepancy between study effects in recall and recognition indicates that future research is needed to identify moderator variables for avoidant and preferred remembering.

  13. A pulse-forming network for particle path visualization. [at Ames Aeromechanics Water Tunnel Facility

    NASA Technical Reports Server (NTRS)

    Mcalister, K. W.

    1981-01-01

    A procedure is described for visualizing nonsteady fluid flow patterns over a wide velocity range using discrete nonluminous particles. The paramount element responsible for this capability is a pulse-forming network with variable inductance that is used to modulate the discharge of a fixed amount of electrical energy through a xenon flashtube. The selectable duration of the resultant light emission functions as a variable shutter so that particle path images of constant length can be recorded. The particles employed as flow markers are hydrogen bubbles that are generated by electrolysis in a water tunnel. Data are presented which document the characteristics of the electrical circuit and establish the relation of particle velocity to both section inductance and film exposure.

  14. Game Indicators Determining Sports Performance in the NBA

    PubMed Central

    Mikołajec, Kazimierz; Maszczyk, Adam; Zając, Tomasz

    The main goal of the present study was to identify the basketball game performance indicators which best determine sports level in the National Basketball Association (NBA). The research material consisted of all NBA game statistics across eight seasons (2003-11) and included 52 performance variables. Through detailed analysis, the variables with high influence on game effectiveness were selected for the final procedures. A limited number of factors, mostly offensive, was shown to determine sports performance in the NBA. The most critical indicators are: Win%, Offensive EFF, 3rd Quarter PPG, Win% CG, Avg Fouls and Avg Steals. In practical applications these results, connected with top teams and elite players, may help coaches to design better training programs. PMID:24146715

  15. Game Indicators Determining Sports Performance in the NBA.

    PubMed

    Mikołajec, Kazimierz; Maszczyk, Adam; Zając, Tomasz

    2013-01-01

    The main goal of the present study was to identify the basketball game performance indicators which best determine sports level in the National Basketball Association (NBA). The research material consisted of all NBA game statistics across eight seasons (2003-11) and included 52 performance variables. Through detailed analysis, the variables with high influence on game effectiveness were selected for the final procedures. A limited number of factors, mostly offensive, was shown to determine sports performance in the NBA. The most critical indicators are: Win%, Offensive EFF, 3rd Quarter PPG, Win% CG, Avg Fouls and Avg Steals. In practical applications these results, connected with top teams and elite players, may help coaches to design better training programs.

  16. A bootstrap based Neyman-Pearson test for identifying variable importance.

    PubMed

    Ditzler, Gregory; Polikar, Robi; Rosen, Gail

    2015-04-01

    Selecting the most informative features, i.e., those that lead to a small loss on future data, is arguably one of the most important steps in classification, data analysis and model selection. Several feature selection (FS) algorithms are available; however, due to noise present in any data set, FS algorithms are typically accompanied by an appropriate cross-validation scheme. In this brief, we propose a statistical hypothesis test derived from the Neyman-Pearson lemma for determining whether a feature is statistically relevant. The proposed approach can be applied as a wrapper to any FS algorithm, regardless of the FS criteria used by that algorithm, to determine whether a feature belongs in the relevant set. Perhaps more importantly, this procedure efficiently determines the number of relevant features given an initial starting point. We provide freely available software implementations of the proposed methodology.
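
    The abstract describes a wrapper that resamples a base FS algorithm to decide which features are genuinely relevant. The sketch below implements only the generic wrapper idea (bootstrap selection frequencies around an ANOVA-F ranking), not the authors' Neyman-Pearson test statistic; all names and thresholds are illustrative:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif

    def bootstrap_selection_frequency(X, y, k=5, B=200, seed=0):
        """How often each feature enters the top-k set of a base FS
        algorithm across bootstrap resamples of the data."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(B):
            idx = rng.integers(0, n, n)                  # bootstrap resample
            sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
            counts[sel.get_support()] += 1
        return counts / B

    X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                               random_state=0)
    freq = bootstrap_selection_frequency(X, y)
    print(np.flatnonzero(freq > 0.9))   # features that are stably selected
    ```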

  17. Resolving combinatorial ambiguities in dilepton tt̄ event topologies with constrained M2 variables

    NASA Astrophysics Data System (ADS)

    Debnath, Dipsikha; Kim, Doojin; Kim, Jeong Han; Kong, Kyoungchul; Matchev, Konstantin T.

    2017-10-01

    We advocate the use of on-shell constrained M2 variables in order to mitigate the combinatorial problem in supersymmetry-like events with two invisible particles at the LHC. We show that in comparison to other approaches in the literature, the constrained M2 variables provide superior ansätze for the unmeasured invisible momenta and therefore can be usefully applied to resolve combinatorial ambiguities. We illustrate our procedure with the example of dilepton tt̄ events. We critically review the existing methods based on the Cambridge MT2 variable and MAOS reconstruction of invisible momenta, and show that their algorithm can be simplified without loss of sensitivity, due to a perfect correlation between events with complex solutions for the invisible momenta and events exhibiting a kinematic endpoint violation. Then we demonstrate that the efficiency for selecting the correct partition is further improved by utilizing the M2 variables instead. Finally, we also consider the general case when the underlying mass spectrum is unknown and no kinematic endpoint information is available.

  18. Does my high blood pressure improve your survival? Overall and subgroup learning curves in health.

    PubMed

    Van Gestel, Raf; Müller, Tobias; Bosmans, Johan

    2017-09-01

    Learning curves in health are of interest for a wide range of medical disciplines, healthcare providers, and policy makers. In this paper, we distinguish between three types of learning when identifying overall learning curves: economies of scale, learning from cumulative experience, and human capital depreciation. In addition, we approach the question of how treating more patients with specific characteristics predicts provider performance. To mitigate collinearity problems, we explore the use of least absolute shrinkage and selection operator (LASSO) regression as a variable selection method and Theil-Goldberger mixed estimation to augment the available information. We use data from the Belgian Transcatheter Aorta Valve Implantation (TAVI) registry, containing information on the first 860 TAVI procedures in Belgium. We find that treating an additional TAVI patient is associated with an increase in the probability of 2-year survival by about 0.16%-points. For adverse events like renal failure and stroke, we find that an extra day between procedures is associated with an increase in the probability for these events by 0.12%-points and 0.07%-points, respectively. Furthermore, we find evidence for positive learning effects from physicians' experience with defibrillation, treating patients with hypertension, and the use of certain types of replacement valves during the TAVI procedure. Copyright © 2017 John Wiley & Sons, Ltd.
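
    For the LASSO step itself, a cross-validated sklearn fit is the standard tool. The snippet below uses synthetic, deliberately collinear predictors (the sample size merely echoes the 860 registry procedures) and does not reproduce the authors' specification, which also used Theil-Goldberger mixed estimation:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.normal(size=(860, 20))                 # hypothetical predictors
    X[:, 1] = X[:, 0] + rng.normal(0, 0.1, 860)    # induce collinearity
    y = 0.4 * X[:, 0] - 0.3 * X[:, 5] + rng.normal(0, 1, 860)

    lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
    print("selected predictors:", np.flatnonzero(lasso.coef_))
    ```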

  19. Adaptive pre-specification in randomized trials with and without pair-matching.

    PubMed

    Balzer, Laura B; van der Laan, Mark J; Petersen, Maya L

    2016-11-10

    In randomized trials, adjustment for measured covariates during the analysis can reduce variance and increase power. To avoid misleading inference, the analysis plan must be pre-specified. However, it is often unclear a priori which baseline covariates (if any) should be adjusted for in the analysis. Consider, for example, the Sustainable East Africa Research in Community Health (SEARCH) trial for HIV prevention and treatment. There are 16 matched pairs of communities and many potential adjustment variables, including region, HIV prevalence, male circumcision coverage, and measures of community-level viral load. In this paper, we propose a rigorous procedure to data-adaptively select the adjustment set, which maximizes the efficiency of the analysis. Specifically, we use cross-validation to select from a pre-specified library the candidate targeted maximum likelihood estimator (TMLE) that minimizes the estimated variance. For further gains in precision, we also propose a collaborative procedure for estimating the known exposure mechanism. Our small sample simulations demonstrate the promise of the methodology to maximize study power, while maintaining nominal confidence interval coverage. We show how our procedure can be tailored to the scientific question (intervention effect for the study sample vs. for the target population) and study design (pair-matched or not). Copyright © 2016 John Wiley & Sons, Ltd.
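
    The core idea, choosing from a pre-specified library the candidate that minimizes a cross-validated variance estimate, can be caricatured with a working linear model. The sketch below scores one candidate adjustment variable by the variance of fold-specific effect estimates; the paper's actual criterion is the estimated variance of a candidate TMLE, so treat this as an illustration only:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from sklearn.model_selection import KFold

    def cv_effect_variance(y, a, W=None, n_splits=5, seed=0):
        """Variance of fold-specific treatment-effect estimates from a working
        linear model. a: 0/1 treatment indicator; W: optional (n,) covariate."""
        cols = [np.ones_like(y), a] + ([] if W is None else [W])
        X = np.column_stack(cols)
        estimates = []
        for tr, _ in KFold(n_splits, shuffle=True, random_state=seed).split(y):
            estimates.append(sm.OLS(y[tr], X[tr]).fit().params[1])
        return np.var(estimates)

    # Pre-specify a library of candidate adjustment variables, then keep the
    # candidate (possibly 'no adjustment') with the smallest criterion value.
    ```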

  20. Forecasting volatility with neural regression: a contribution to model adequacy.

    PubMed

    Refenes, A N; Holt, W T

    2001-01-01

    Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
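
    The classical Durbin-Watson statistic at the heart of the proposed misspecification test is simple to compute on a fitted model's residuals; the paper's generalization adjusts its distribution via an influence matrix for the neural estimator, which this minimal sketch does not attempt:

    ```python
    import numpy as np

    def durbin_watson(resid):
        """DW = sum((e_t - e_{t-1})^2) / sum(e_t^2); values near 2 indicate
        little first-order autocorrelation in the residual series."""
        d = np.diff(resid)
        return (d @ d) / (resid @ resid)

    # e.g., resid = y_test - model.predict(X_test) for any fitted regressor
    ```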

  1. Understanding data requirements of retrospective studies.

    PubMed

    Shenvi, Edna C; Meeker, Daniella; Boxwala, Aziz A

    2015-01-01

    Usage of data from electronic health records (EHRs) in clinical research is increasing, but there is little empirical knowledge of the data needed to support the many types of research these sources can serve. This study seeks to characterize the types and patterns of data usage from EHRs for clinical research. We analyzed the data requirements of over 100 retrospective studies by mapping the selection criteria and study variables to data elements of two standard data dictionaries, one from the healthcare domain and the other from the clinical research domain. We also contacted study authors to validate our results. The majority of variables mapped to one or both of the two dictionaries. Studies used an average of 4.46 (range 1-12) data element types in the selection criteria and 6.44 (range 1-15) in the study variables. The most frequently used items (e.g., procedure, condition, medication) are often available in coded form in EHRs. Study criteria were frequently complex, with 49 of 104 studies involving relationships between data elements and 22 of the studies using aggregate operations for data variables. Author responses supported these findings. The high proportion of mapped data elements demonstrates the significant potential for clinical data warehousing to facilitate clinical research. Unmapped data elements illustrate the difficulty in developing a complete data dictionary. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Variability in Accreditation Council for Graduate Medical Education Resident Case Log System practices among orthopaedic surgery residents.

    PubMed

    Salazar, Dane; Schiff, Adam; Mitchell, Erika; Hopkinson, William

    2014-02-05

    The Accreditation Council for Graduate Medical Education (ACGME) Resident Case Log System is designed to be a reflection of residents' operative volume and an objective measure of their surgical experience. All operative procedures and manipulations in the operating room, Emergency Department, and outpatient clinic are to be logged into the Resident Case Log System. Discrepancies in the log volumes between residents and residency programs often prompt scrutiny. However, it remains unclear if such disparities truly represent differences in operative experiences or if they are reflections of inconsistent logging practices. The purpose of this study was to investigate individual recording practices among orthopaedic surgery residents prior to August 1, 2011. Orthopaedic surgery residents received a questionnaire on case log practices that was distributed through the Council of Orthopaedic Residency Directors list server. Respondents were asked to report anonymously on their recording practices in different clinical settings, as well as the types of cases they routinely logged. Hypothetical scenarios of common orthopaedic procedures were presented to investigate the differences in the Current Procedural Terminology codes utilized. Two hundred and ninety-eight orthopaedic surgery residents completed the questionnaire; 37% were fifth-year residents, 22% were fourth-year residents, 18% were third-year residents, 15% were second-year residents, and 8% were first-year residents. Fifty-six percent of respondents reported routinely logging procedures performed in the Emergency Department or urgent care setting. Twenty-two percent of participants routinely logged procedures in the clinic or outpatient setting, 20% logged joint injections, and only 13% logged casts or splints applied in the office setting. There was substantial variability in the Current Procedural Terminology codes selected for the seven clinical scenarios. There was a lack of standardization in case-logging practices among orthopaedic surgery residents prior to August 1, 2011. ACGME case log data prior to this date may not be a reliable measure of residents' procedural experience.

  3. Prioritizing individual genetic variants after kernel machine testing using variable selection.

    PubMed

    He, Qianchuan; Cai, Tianxi; Liu, Yang; Zhao, Ni; Harmon, Quaker E; Almli, Lynn M; Binder, Elisabeth B; Engel, Stephanie M; Ressler, Kerry J; Conneely, Karen N; Lin, Xihong; Wu, Michael C

    2016-12-01

    Kernel machine learning methods, such as the SNP-set kernel association test (SKAT), have been widely used to test associations between traits and genetic polymorphisms. In contrast to traditional single-SNP analysis methods, these methods are designed to examine the joint effect of a set of related SNPs (such as a group of SNPs within a gene or a pathway) and are able to identify sets of SNPs that are associated with the trait of interest. However, as with many multi-SNP testing approaches, kernel machine testing can draw conclusions only at the SNP-set level and does not directly indicate which SNP(s) within an identified set actually drive the associations. A recently proposed procedure, KerNel Iterative Feature Extraction (KNIFE), provides a general framework for incorporating variable selection into kernel machine methods. In this article, we focus on quantitative traits and relatively common SNPs, adapt the KNIFE procedure to genetic association studies, and propose an approach to identify driver SNPs after the application of SKAT to gene set analysis. Our approach accommodates several kernels that are widely used in SNP analysis, such as the linear kernel and the Identity by State (IBS) kernel. The proposed approach provides practically useful utilities to prioritize SNPs and fills the gap between SNP set analysis and biological functional studies. Both simulation studies and a real data application are used to demonstrate the proposed approach. © 2016 WILEY PERIODICALS, INC.
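
    A score-type SNP-set statistic of the kind SKAT uses is easy to state for a quantitative trait: with a kernel K built from the genotype matrix, Q = r'Kr for null-model residuals r (up to a variance scaling). The sketch below shows the linear-kernel case without covariates; obtaining a p-value requires the mixture-of-chi-squares null distribution, which is omitted here:

    ```python
    import numpy as np

    def skat_like_q(G, y):
        """Score-type set statistic with a linear kernel K = G G'.
        G: (n, m) genotypes for one SNP set; y: (n,) quantitative trait.
        Residuals are taken from the intercept-only null model."""
        r = y - y.mean()
        return r @ (G @ (G.T @ r))   # equals r' G G' r without forming K
    ```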

  4. Methodological approach for the assessment of ultrasound reproducibility of cardiac structure and function: a proposal of the study group of Echocardiography of the Italian Society of Cardiology (Ultra Cardia SIC) Part I

    PubMed Central

    2011-01-01

    When applying echo-Doppler imaging for either clinical or research purposes, it is very important to select the most adequate modality/technology and to choose the most reliable and reproducible measurements. Quality control is a mainstay for reducing variability among institutions and operators and must be achieved through appropriate procedures for data acquisition, storage and interpretation of echo-Doppler data. This goal can be achieved by employing an echo core laboratory (ECL), with responsibility for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for image acquisition in peripheral laboratories and for reading, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved in setting up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (image acquisition and measurement reading) and documents the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures can also be suggested for use in non-referral echocardiographic laboratories as an in-house quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283

  5. Assessment of modal-pushover-based scaling procedure for nonlinear response history analysis of ordinary standard bridges

    USGS Publications Warehouse

    Kalkan, E.; Kwong, N.

    2012-01-01

    The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case in the central United States) or when high-intensity records are needed (as is the case in San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of “true” median structural responses. The adjective “accurate” refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective “efficient” refers to the record-to-record variability of responses. In this paper, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing Ordinary Standard bridges typical of reinforced concrete bridge construction in California. These bridges are the single-bent overpass, multi-span bridge, curved bridge, and skew bridge. As compared with benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the EDPs. Thus, it is a useful tool for scaling ground motions as input to nonlinear RHAs of Ordinary Standard bridges.

  6. Achievability of 3D planned bimaxillary osteotomies: maxilla-first versus mandible-first surgery.

    PubMed

    Liebregts, Jeroen; Baan, Frank; de Koning, Martien; Ongkosuwito, Edwin; Bergé, Stefaan; Maal, Thomas; Xi, Tong

    2017-08-24

    The present study aimed to investigate the effects of sequencing a two-component surgical procedure for correcting malpositioned jaws (bimaxillary osteotomies), specifically surgical repositioning of the upper jaw (maxilla) and the lower jaw (mandible). Within a population of 116 patients requiring bimaxillary osteotomies, the investigators analyzed whether there were statistically significant differences in postoperative outcome, as measured by concordance with a preoperative digital 3D virtual treatment plan. In one group of subjects (n = 58), the maxillary surgical procedure preceded the mandibular surgery. In the second group (n = 58), the mandibular procedure preceded the maxillary surgical procedure. A semi-automated analysis tool (OrthoGnathicAnalyser) was applied to assess the concordance of the postoperative maxillary and mandibular position with the cone beam CT-based 3D virtual treatment planning, in an effort to minimize observer variability. The results demonstrated that in most instances the maxilla-first surgical approach yielded closer concordance with the 3D virtual treatment plan than a mandible-first procedure. In selected circumstances, such as a planned counterclockwise rotation of both jaws, the mandible-first sequence resulted in more predictable displacements of the jaws.

  7. Processing techniques for software based SAR processors

    NASA Technical Reports Server (NTRS)

    Leung, K.; Wu, C.

    1983-01-01

    Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms cover data processing procedure selection, SAR correlation function implementation, utilization of multiple array processors, corner turning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP), originally implemented for handling Seasat SAR data, has been adapted for the SIR-B and offers a resolution of 100 km using a processing procedure based on the Fast Fourier Transform fast correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.

  8. 45 CFR 1217.4 - Selection procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false Selection procedure. 1217.4 Section 1217.4 Public... VISTA VOLUNTEER LEADER § 1217.4 Selection procedure. (a) Nomination. Candidates may be nominated in... Director's review. (b) Selection. VISTA volunteer leaders will be selected by the Regional Director (or his...

  9. High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis

    PubMed Central

    Daye, Z. John; Chen, Jinbo; Li, Hongzhe

    2011-01-01

    Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in an expression quantitative trait loci (eQTL) study of 112 yeast segregants and apply our method to it. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and leads to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
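
    One way to see the mean-variance coupling is an alternating scheme: fit a penalized mean model with inverse-variance weights, then a penalized model for the log squared residuals, and iterate. This is a crude stand-in for the paper's doubly regularized estimator, with hypothetical penalty values:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def hetero_lasso(X, y, n_iter=5, alpha_mean=0.05, alpha_var=0.05):
        """Alternate between a weighted LASSO for the mean and a LASSO for
        the log residual variance; weights are inverse estimated variances."""
        weights = np.ones(len(y))
        for _ in range(n_iter):
            mean_fit = Lasso(alpha=alpha_mean).fit(X, y, sample_weight=weights)
            resid = y - mean_fit.predict(X)
            var_fit = Lasso(alpha=alpha_var).fit(X, np.log(resid**2 + 1e-8))
            weights = np.exp(-var_fit.predict(X))   # 1 / estimated variance
        return mean_fit, var_fit
    ```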

  10. Use of Polyamine Derivatives as Selective Histone Deacetylase Inhibitors

    PubMed Central

    Woster, Patrick M.

    2014-01-01

    Histone acetylation and deacetylation, mediated by histone acetyltransferase and the 11 isoforms of histone deacetylase, play an important role in gene expression. Histone deacetylase inhibitors have found utility in the treatment of cancer by promoting the reexpression of aberrantly silenced genes that code for tumor suppressor factors. It is unclear which of the 11 histone deacetylase isoforms are important in human cancer. We have designed a series of polyaminohydroxamic acid (PAHA) and polyaminobenzamide (PABA) histone deacetylase inhibitors that exhibit selectivity among four histone deacetylase isoforms. Although all of the active inhibitors promote reexpression of tumor suppressor factors, they produce variable cellular effects ranging from stimulation of growth to cytostasis and cytotoxicity. This chapter describes the procedures used to quantify the global and isoform-specific inhibition caused by these inhibitors, and techniques used to measure cellular effects such as reexpression of tumor suppressor proteins and hyperacetylation of histones H3 and H4. Procedures are also described to examine the ability of PAHAs and PABAs to utilize the polyamine transport system and to induce overexpression of the early apoptotic factor annexin A1. PMID:21318894

  11. Meta-Analysis of Inquiry-Based Instruction Research

    NASA Astrophysics Data System (ADS)

    Hasanah, N.; Prasetyo, A. P. B.; Rudyatmi, E.

    2017-04-01

    Inquiry-based instruction in biology has been the focus of educational research conducted by Unnes biology department students in collaboration with their university supervisors. This study aimed to critically describe the methodological aspects and inquiry teaching methods, and to analyse the results claims, of four selected student research reports grounded in inquiry, drawn from the Unnes biology department 2014 database. Four of 16 experimental quantitative studies were selected as research objects by purposive sampling. Data collected through document study were qualitatively analysed regarding the methods used, the quality of the inquiry syntax, and the finding claims. Findings showed that the student research still lacked rigor in relevant aspects of research methodology, namely inappropriate sampling procedures, limited validity testing of the research instruments, and use of parametric statistics (t-tests) not supported by prior tests of data normality. Their consistent inquiry syntax supported the four mini-thesis claims that inquiry-based teaching significantly influenced the dependent variables. In other words, the findings indicated that the positive claims of the research results were not fully supported by good research methods and well-defined implementation of inquiry procedures.

  12. A methodology to guide the selection of composite materials in a wind turbine rotor blade design process

    NASA Astrophysics Data System (ADS)

    Bortolotti, P.; Adolphs, G.; Bottasso, C. L.

    2016-09-01

    This work is concerned with the development of an optimization methodology for the composite materials used in wind turbine blades. The goal of the approach is to guide designers in the selection of the different materials of the blade, while providing indications to composite manufacturers on optimal trade-offs between mechanical properties and material costs. The method works by using a parametric material model and including its free parameters among the design variables of a multi-disciplinary wind turbine optimization procedure. The proposed method is tested on the structural redesign of a conceptual 10 MW wind turbine blade, with its spar cap and shell skin laminates subjected to optimization. The procedure identifies a blade optimum with a new spar cap laminate characterized by a higher longitudinal Young's modulus and higher cost than the initial one, which however in turn induces both cost and mass savings in the blade. In terms of the shell skin, the adoption of a laminate with properties intermediate between a bi-axial and a tri-axial one also leads to slight structural improvements.

  13. Variable selection in a flexible parametric mixture cure model with interval-censored data.

    PubMed

    Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim

    2016-03-30

    In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Moreover, in medical studies patients often come only to scheduled interviews, so the time to the event is known only to fall between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of particular interest. The covariates impacting survival are not necessarily the same as those impacting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma distribution for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of the estimates under varying cure and censoring proportions. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
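
    The adaptive LASSO itself is easy to emulate for a plain linear model via feature rescaling: an initial fit supplies weights w_j = 1/|beta_j|^gamma, and a standard LASSO on the rescaled columns applies the weighted penalty. The authors embed this penalty in a mixture cure likelihood, which the sketch below does not attempt:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge, Lasso

    def adaptive_lasso(X, y, gamma=1.0, alpha=0.1):
        beta0 = Ridge(alpha=1.0).fit(X, y).coef_     # initial (pilot) fit
        w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)    # adaptive penalty weights
        fit = Lasso(alpha=alpha).fit(X / w, y)       # LASSO on rescaled columns
        return fit.coef_ / w                         # back to the original scale
    ```

    The rescaling trick works because penalizing sum_j |b_j| on columns X_j / w_j is equivalent to penalizing sum_j w_j |beta_j| on the original columns.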

  14. Genetic characterization of fig tree mutants with molecular markers.

    PubMed

    Rodrigues, M G F; Martins, A B G; Desidério, J A; Bertoni, B W; Alves, M C

    2012-08-06

    The fig (Ficus carica L.) is a fruit tree of great worldwide importance, and its genetic improvement is therefore an important field of research for better crops; it is necessary to gather information on this species, mainly regarding its genetic variability, so that appropriate propagation and management projects can be designed. Improvement programs for fig trees using conventional procedures to obtain new cultivars are rare in many countries, such as Brazil, mainly because of the limited genetic variability and the difficulty of obtaining plants from gamete fusion, since the wasp Blastophaga psenes, responsible for natural pollination, is not found in Brazil. Mutagenic genetic improvement is therefore an attractive alternative. For this reason, in an experiment conducted earlier, fig plants grown from cuttings treated with gamma rays were selected based on their agronomic characteristics of interest. We determined the genetic variability in these fig tree selections using RAPD and AFLP molecular markers, comparing them to each other and to Roxo-de-Valinhos, used as the standard. For the DNA amplification reactions, 140 RAPD primers and 12 primer combinations for AFLP analysis were used. The selections did not differ genetically from each other or from the Roxo-de-Valinhos cultivar. Techniques that can detect polymorphism between treatments, such as DNA sequencing, must be tested. The phenotypic variation of the plants may be due to epigenetic variation, necessitating the use of techniques with methylation-sensitive restriction enzymes.

  15. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    PubMed Central

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective To clarify the mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the expectations of the researchers. Most detected genetic associations have not reached a genome-wide level of statistical significance and have suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of the statistical methods used in data analyses, the genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. A further limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare the average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. We used a stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging and on the dynamic interaction between aging and longevity. We investigated the properties of the genes related to the selected variants and their roles in signaling and metabolic pathways. Results We showed that the use of different QC procedures results in different sets of genetic variants associated with life span. We selected 24 genetic variants negatively associated with life span. We showed that joint analyses of genetic data collected at the time of bio-specimen collection and follow-up data substantially improved the significance of the associations of the selected 24 SNPs with life span. We also showed that aging-related changes in physiological variables and in hidden biomarkers of aging differ between carriers and non-carriers of the selected variants. Conclusions The results of these analyses demonstrated the benefits of using biodemographic models and methods in genetic association studies of these traits. Our findings showed that the absence of a large number of genetic variants with deleterious effects may make a substantial contribution to exceptional longevity. These effects are dynamically mediated by a number of physiological variables and hidden biomarkers of aging. The results also demonstrated the benefits of using integrative statistical models of mortality risks in genetic studies of human aging and longevity. PMID:27773987

  16. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  17. 47 CFR 1.1602 - Designation for random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...

  18. [Education for patients with fibromyalgia. A systematic review of randomised clinical trials].

    PubMed

    Elizagaray-Garcia, Ignacio; Muriente-Gonzalez, Jorge; Gil-Martinez, Alfonso

    2016-01-16

    To analyse the effectiveness of education on pain, quality of life and functionality in patients with fibromyalgia. The search for articles was carried out in electronic databases. Eligibility criteria were: randomised controlled clinical trials (RCTs), published in English or Spanish, conducted on patients with fibromyalgia, in which the therapeutic procedure was based on patient education. Two independent reviewers analysed the methodological quality using the PEDro scale. Five RCTs were selected, of which four offered good methodological quality. In three of the studies, patient education, in combination with another intervention based on therapeutic exercise, improved the outcomes in the variables assessing pain and quality of life, compared with the same procedures performed separately. Moreover, a methodologically strong RCT showed that patient education activated inhibitory neural pathways capable of lowering the level of pain. The quantitative analysis yields strong to moderate evidence that patient education, in combination with other therapeutic exercise procedures, offers positive results in the variables pain, quality of life and functionality. Patient education by itself has not proved to be effective for pain, quality of life or functionality in patients with fibromyalgia. There is strong evidence, however, of the effectiveness of combining patient education with exercise and active coping strategies for pain, quality of life and functionality in the short, medium and long term in patients with fibromyalgia.

  19. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) that uses grouping and selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
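
    For reference, the core affine projection update that the grouping and selection procedures feed into has this standard form (a sketch with a regularized Gram matrix; the paper's contribution is how the rows of X are chosen, which is not reproduced here):

    ```python
    import numpy as np

    def apa_update(w, X, d, mu=0.5, delta=1e-6):
        """One affine projection update. X: (K, N) matrix whose rows are the
        selected input vectors; d: (K,) desired outputs; w: (N,) coefficients."""
        e = d - X @ w                                  # a priori error vector
        G = X @ X.T + delta * np.eye(X.shape[0])       # regularized Gram matrix
        return w + mu * X.T @ np.linalg.solve(G, e)    # projection-based update
    ```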

  20. Consultant selection guidebook : procedures for selecting consultants for FHWA federal-aid projects and state funded projects. [Rev. 2002

    DOT National Transportation Integrated Search

    2002-01-01

    This Guidebook provides an overview of procedures for consultant selection. The local agencies that intend to request federal and state funds for reimbursement of consultant services should follow specific selection and contracting procedures. These ...

  1. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  2. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  3. 47 CFR 1.1603 - Conduct of random selection.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...

  4. 47 CFR 1.1604 - Post-selection hearings.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...

  5. State Variation in Medicaid Reimbursements for Orthopaedic Surgery.

    PubMed

    Lalezari, Ramin M; Pozen, Alexis; Dy, Christopher J

    2018-02-07

    Medicaid reimbursements are determined by each state and are subject to variability. We sought to quantify this variation for commonly performed inpatient orthopaedic procedures. The 10 most commonly performed inpatient orthopaedic procedures, as ranked by the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample, were identified for study. Medicaid reimbursement amounts for those procedures were benchmarked to state Medicare reimbursement amounts in 3 ways: (1) ratio, (2) dollar difference, and (3) dollar difference divided by the relative value unit (RVU) amount. Variability was quantified by determining the range and coefficient of variation for those reimbursement amounts. The range of variability of Medicaid reimbursements among states exceeded $1,500 for all 10 procedures. The coefficients of variation ranged from 0.32 (hip hemiarthroplasty) to 0.57 (posterior or posterolateral lumbar interbody arthrodesis) (a higher coefficient indicates greater variability), compared with 0.07 for Medicare reimbursements for all 10 procedures. Adjusted as a dollar difference between Medicaid and Medicare per RVU, the median values ranged from -$8/RVU (total knee arthroplasty) to -$17/RVU (open reduction and internal fixation of the femur). Variability of Medicaid reimbursement for inpatient orthopaedic procedures among states is substantial. This variation becomes especially remarkable given recent policy shifts toward focusing reimbursements on value.

  6. Sex-related differences in coronary revascularization practices: the perspective from a Canadian queue management project.

    PubMed

    Naylor, C D; Levinton, C M

    1993-10-01

    To assess sex-related differences in coronary revascularization practices in a Canadian setting. Prospective analytic cohort study. Regional referral office in Toronto. A selected but consecutive group of 131 women and 440 men referred by cardiologists for revascularization procedures between Jan. 3, 1989, and June 30, 1991. Coronary artery bypass grafting (CABG) or percutaneous transluminal coronary angioplasty (PTCA). Nurse-coordinators placed the referral with a surgeon or interventional cardiologist at one of three hospitals, who then communicated directly with the referring cardiologist. Symptom status at referral, procedures requested and performed, and time from referral to procedure. Although the women were more likely than the men to have unstable angina at the time of referral (odds ratio [OR] 2.28, 95% confidence interval [CI] 1.38 to 3.79, p = 0.0006), more women than men (16.8% v. 12.1%) were turned down for a procedure. Significant sex-related differences in practice patterns (p < 0.001) persisted after controlling for age or for the referring cardiologists' perception of expected procedural risk. A stepwise multivariate model showed that anatomy was the main determinant of case management; sex was the only other significant variable (p = 0.016). The referring physicians requested CABG more often for men than for women (p = 0.009), and the men accepted for a procedure were much more likely to undergo CABG than the women (OR 2.40, CI 1.47 to 3.93, p = 0.0002). Although the women undergoing CABG waited shorter periods than the men (p = 0.0035), this difference was attributable to their more severe symptoms. In this selected group women had more serious symptoms before referral but were turned down for revascularization more often than men. Reduced use of CABG rather than PTCA largely accounted for the sex-related differences in revascularization. Once accepted for a procedure women had shorter waiting times, which was appropriate given their more severe symptoms.

  7. Building a computer program to support children, parents, and distraction during healthcare procedures.

    PubMed

    Hanrahan, Kirsten; McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W Nick; Zimmerman, M Bridget; Ersig, Anne L

    2012-10-01

    This secondary data analysis used data mining methods to develop predictive models of a child's risk for distress during a healthcare procedure. The data came from a study that identified factors associated with children's responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing the variables identified as explanatory in traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to the characteristics and preferences of the parent and child. The resulting decision support computer application, titled Children, Parents and Distraction, is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure.

  8. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...

  9. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...

  10. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...

  11. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...

  12. Knowledge Driven Variable Selection (KDVS) – a new approach to enrichment analysis of gene signatures obtained from high-throughput data

    PubMed Central

    2013-01-01

    Background High-throughput (HT) technologies provide huge amounts of gene expression data that can be used to identify biomarkers useful in clinical practice. The most frequently used approaches first select a set of genes (i.e. a gene signature) able to characterize differences between two or more phenotypical conditions, and then provide a functional assessment of the selected genes with an a posteriori enrichment analysis based on biological knowledge. However, this approach comes with some drawbacks. First, the gene selection procedure often requires tunable parameters that affect the outcome, typically producing many false hits. Second, a posteriori enrichment analysis is based on a mapping between biological concepts and gene expression measurements, which is hard to compute because of constant changes in biological knowledge and genome analysis. Third, such mapping is typically used in the assessment of the coverage of the gene signature by biological concepts, which is either score-based or requires tunable parameters as well, limiting its power. Results We present Knowledge Driven Variable Selection (KDVS), a framework that uses a priori biological knowledge in HT data analysis. The expression data matrix is transformed, according to prior knowledge, into smaller matrices that are easier to analyze and to interpret from both computational and biological viewpoints. Therefore KDVS, unlike most approaches, does not exclude a priori any function or process potentially relevant for the biological question under investigation. Unlike the standard approach, where gene selection and functional assessment are applied independently, KDVS embeds these two steps into a unified statistical framework, decreasing the variability derived from the threshold-dependent selection, the mapping to the biological concepts, and the signature coverage. We present three case studies to assess the usefulness of the method. Conclusions We showed that KDVS not only enables the accurate selection of known biological functionalities, but also the identification of new ones. An efficient implementation of KDVS was devised to obtain results in a fast and robust way. Computing time is drastically reduced by the effective use of distributed resources. Finally, integrated visualization techniques immediately increase the interpretability of the results. Overall, the KDVS approach can be considered a viable alternative to enrichment-based approaches. PMID:23302187
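
    The structural idea, slicing the expression matrix into prior-knowledge blocks and embedding selection inside each block, can be sketched in a few lines; the pathway memberships and the per-block selector below are placeholders, not the actual KDVS machinery:

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    def kdvs_like(X, y, pathways):
        """X: (samples, genes) expression matrix; pathways: dict mapping a
        biological concept to the column indices of its member genes.
        Runs variable selection inside each prior-knowledge block."""
        hits = {}
        for concept, cols in pathways.items():
            coef = LassoCV(cv=5).fit(X[:, cols], y).coef_
            hits[concept] = [c for c, b in zip(cols, coef) if b != 0.0]
        return hits

    # e.g. pathways = {"GO:0006915": [0, 3, 7], "GO:0008152": [1, 2, 5, 9]}
    ```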

  13. Calibration of ultrasonic power output in water, ethanol and sodium polytungstate

    NASA Astrophysics Data System (ADS)

    Mentler, Axel; Schomakers, Jasmin; Kloss, Stefanie; Zechmeister-Boltenstern, Sophie; Schuller, Reinhard; Mayer, Herwig

    2017-10-01

    Ultrasonic power is the main variable that forms the basis of many soil disaggregation experiments. Thus, a procedure for the rapid determination of this variable has been developed and is described in this article. Calorimetric experiments serve to measure specific heat capacity and ultrasonic power. Ultrasonic power is determined experimentally for deionised water, 30% ethanol and sodium polytungstate with densities of 1.6 g cm-3 and 1.8 g cm-3. All experiments are performed with a pre-selected ultrasonic probe vibration amplitude. Under these conditions, it was found that the emitted ultrasonic power was comparable in the four fluids. It is suggested, however, to perform calibration experiments prior to dispersion experiments, since the fluid used, as well as the ultrasonic equipment employed, may influence the power output.
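
    The calorimetric determination described here reduces to P = m · c_p · dT/dt. A minimal sketch follows, with illustrative values rather than the paper's measurements:

        # Calorimetric estimate of emitted ultrasonic power: P = m * c_p * dT/dt,
        # with dT/dt taken as the slope of a linear fit to the temperature rise.
        import numpy as np

        t = np.arange(0.0, 300.0, 10.0)    # sonication time [s]
        T = 20.0 + 0.004 * t + np.random.default_rng(1).normal(0, 0.01, t.size)

        m = 0.25        # fluid mass [kg]
        c_p = 4186.0    # specific heat capacity of water [J kg^-1 K^-1]

        slope = np.polyfit(t, T, 1)[0]     # dT/dt [K s^-1]
        power = m * c_p * slope            # emitted ultrasonic power [W]
        print(f"P = {power:.2f} W")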

  14. A Random Forest Based Risk Model for Reliable and Accurate Prediction of Receipt of Transfusion in Patients Undergoing Percutaneous Coronary Intervention

    PubMed Central

    Gurm, Hitinder S.; Kooiman, Judith; LaLonde, Thomas; Grines, Cindy; Share, David; Seth, Milan

    2014-01-01

    Background Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. Methods Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. Results Our study cohort comprised 103,294 PCI procedures performed at 46 hospitals in Michigan between July 2009 and December 2012, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877–0.899), reduced model AUC = 0.880 (95% CI, 0.868–0.892), p for difference 0.003, NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. Conclusions The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel, easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction algorithm may prove useful both for bedside clinical decision making and for risk adjustment in the assessment of quality. PMID:24816645
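
    A hedged sketch of the modelling step (synthetic data, not the BMC2 registry; hyperparameters assumed): fit a random forest on 45 candidate predictors, validate by AUC on a held-out split, and build an abbreviated model from the most influential variables.

        # Illustrative sketch: full random forest, then a reduced model built
        # from the top-ranked variable importances, both scored by AUC.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=5000, n_features=45,
                                   n_informative=10, weights=[0.95],
                                   random_state=0)
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        print("full AUC:", roc_auc_score(y_va, rf.predict_proba(X_va)[:, 1]))

        top = np.argsort(rf.feature_importances_)[::-1][:10]   # reduced model
        rf_small = RandomForestClassifier(n_estimators=300, random_state=0)
        rf_small.fit(X_tr[:, top], y_tr)
        print("reduced AUC:", roc_auc_score(y_va, rf_small.predict_proba(X_va[:, top])[:, 1]))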

  15. Left Atrial Appendage Closure for Stroke Prevention: Devices, Techniques, and Efficacy.

    PubMed

    Iskandar, Sandia; Vacek, James; Lavu, Madhav; Lakkireddy, Dhanunjaya

    2016-05-01

    Left atrial appendage closure can be performed either surgically or percutaneously. Surgical approaches include direct suture, excision and suture, stapling, and clipping. Percutaneous approaches include endocardial, epicardial, and hybrid endocardial-epicardial techniques. Left atrial appendage anatomy is highly variable and complex; therefore, preprocedural imaging is crucial to determine device selection and sizing, which contribute to procedural success and reduction of complications. Currently, the WATCHMAN is the only device that is approved for left atrial appendage closure in the United States. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Technical Performance as a Predictor of Clinical Outcomes in Laparoscopic Gastric Cancer Surgery.

    PubMed

    Fecso, Andras B; Bhatti, Junaid A; Stotland, Peter K; Quereshy, Fayez A; Grantcharov, Teodor P

    2018-03-23

    The purpose of this study was to evaluate the relationship between technical performance and patient outcomes in laparoscopic gastric cancer surgery. Laparoscopic gastrectomy for cancer is an advanced procedure with a high rate of postoperative morbidity and mortality. Many variables, including patient, disease, and perioperative management factors, have been shown to impact postoperative outcomes; however, the role of surgical performance is insufficiently investigated. A retrospective review was performed for all patients who had undergone laparoscopic gastrectomy for cancer at 3 teaching institutions between 2009 and 2015. Patients with an available, unedited video recording of their procedure were included in the study. Video files were rated for technical performance using the Objective Structured Assessment of Technical Skills (OSATS) and Generic Error Rating Tool instruments. The main outcome variable was major short-term complications. The effect of technical performance on patient outcomes was assessed using logistic regression analysis with a backward selection strategy. Sixty-one patients with available video recordings were included in the study. The overall complication rate was 29.5%. The mean Charlson comorbidity index, type of procedure, and the global OSATS score were included in the final predictive model. A lower performance score (OSATS ≤29) remained an independent predictor of major short-term outcomes (odds ratio 6.49) while adjusting for comorbidities and type of procedure. Intraoperative technical performance predicts major short-term outcomes in laparoscopic gastrectomy for cancer. Ongoing assessment and enhancement of surgical skills using modern, evidence-based strategies might improve short-term patient outcomes. Future work should focus on developing and studying the effectiveness of such interventions in laparoscopic gastric cancer surgery.
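
    A minimal sketch of the reported analysis type, on simulated data with made-up coefficients: logistic regression of major complications on a dichotomized skill score, adjusted for comorbidity and procedure type.

        # Simulated sketch: adjusted odds ratio for a low skill score
        # (OSATS <= 29) from a logistic regression; numbers are illustrative.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 61
        low_skill = rng.integers(0, 2, n)
        charlson = rng.poisson(2, n)
        total_gastrectomy = rng.integers(0, 2, n)
        logit = -2.0 + 1.87 * low_skill + 0.2 * charlson + 0.5 * total_gastrectomy
        complication = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = sm.add_constant(np.column_stack([low_skill, charlson, total_gastrectomy]))
        fit = sm.Logit(complication, X).fit(disp=False)
        print("adjusted OR for low skill:", np.exp(fit.params[1]))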

  17. Hemispherectomy for catastrophic epilepsy in infants.

    PubMed

    González-Martínez, Jorge A; Gupta, Ajay; Kotagal, Prakash; Lachhwani, Deepak; Wyllie, Elaine; Lüders, Hans O; Bingaman, William E

    2005-09-01

    To report our experience with hemispherectomy in the treatment of catastrophic epilepsy in children younger than 2 years. In a single-surgeon series, we performed a retrospective analysis of 18 patients with refractory epilepsy undergoing hemispherectomy (22 procedures). Three different surgical techniques were performed: anatomic hemispherectomy, functional hemispherectomy, and modified anatomic hemispherectomy. Pre- and postoperative evaluations included extensive video-EEG monitoring, magnetic resonance imaging, and positron emission tomography scanning. Seizure outcome was correlated with possible variables associated with persistent postoperative seizures. The generalized estimating equation (GEE) and Barnard's exact test were used as statistical methods. The follow-up was 12-74 months (mean, 34.8 months). Mean weight was 9.3 kg (6-12.3 kg). Patient age was 3-22 months (mean, 11.7 months). Thirteen (66%) patients were seizure free, and four patients had >90% reduction in seizure frequency and intensity. The overall complication rate was 16.7%. No deaths occurred. Twelve (54.5%) of 22 procedures resulted in incomplete disconnection, evidenced on postoperative images. Type of surgical procedure, diagnostic category, persistence of insular cortex, and bilateral interictal epileptiform activity were not associated with persistent seizures after surgery. Incomplete disconnection was the only variable statistically associated with persistent seizures after surgery (p<0.05). Hemispherectomy for seizure control provides excellent and dramatic results with a satisfactory complication rate. Our results support the concept that early surgery should be indicated in highly selected patients with catastrophic epilepsy. Safety factors such as an expert team in the pediatric intensive care unit, neuroanesthesia, and a pediatric epilepsy surgeon familiar with the procedure are mandatory.

  18. Analyzing the effect of selected control policy measures and sociodemographic factors on alcoholic beverage consumption in Europe within the AMPHORA project: statistical methods.

    PubMed

    Baccini, Michela; Carreras, Giulia

    2014-10-01

    This paper describes the methods used to investigate variations in total alcoholic beverage consumption as related to selected control intervention policies and other socioeconomic factors (unplanned factors) within 12 European countries involved in the AMPHORA project. The analysis presented several critical points: presence of missing values, strong correlation among the unplanned factors, and long-term waves or trends in both the time series of alcohol consumption and the time series of the main explanatory variables. These difficulties were addressed by implementing a multiple imputation procedure for filling in missing values, then specifying for each country a multiple regression model which accounted for time trend, policy measures and a limited set of unplanned factors, selected in advance on the basis of sociological and statistical considerations. This approach allowed estimating the "net" effect of the selected control policies on alcohol consumption, but not the association between each unplanned factor and the outcome.
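
    A simplified sketch of the two-step strategy, using a single stochastic imputation in place of the paper's full multiple-imputation procedure; the country-level series and variable names are illustrative.

        # Simplified sketch: impute missing values, then regress consumption
        # on a time trend, a policy indicator and a pre-selected unplanned
        # factor. All series are simulated.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(4)
        years = np.arange(1961, 2009)
        df = pd.DataFrame({
            "consumption": 10 - 0.05 * (years - 1961) + rng.normal(0, 0.5, years.size),
            "tax_policy": (years > 1990).astype(float),
            "gdp": rng.normal(0, 1, years.size),
        })
        df.loc[rng.choice(years.size, 5, replace=False), "gdp"] = np.nan  # gaps

        imputed = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df),
                               columns=df.columns)
        X = sm.add_constant(pd.DataFrame({"trend": years - years[0],
                                          "tax_policy": imputed.tax_policy,
                                          "gdp": imputed.gdp}))
        print(sm.OLS(imputed.consumption, X).fit().params)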

  19. Evaluation of standardized and applied variables in predicting treatment outcomes of polytrauma patients.

    PubMed

    Aksamija, Goran; Mulabdic, Adi; Rasic, Ismar; Muhovic, Samir; Gavric, Igor

    2011-01-01

    Polytrauma is defined as an injury affecting at least two different organ systems or body regions, with at least one injury being life-threatening. Given the multilevel model of care for polytrauma patients within KCUS, weaknesses in the management of this category of patients are inevitable. The aims were to determine the dynamics of existing procedures in the treatment of polytrauma patients on admission to KCUS and, based on statistical analysis of the applied variables, to identify and define the factors that influence the final outcome of treatment and to determine their mutual relationships, which may help eliminate flaws in the approach to the problem. The study was based on 263 polytrauma patients. Parametric and non-parametric statistical methods were used. Basic statistics were calculated and, building on these parameters, multicollinearity analysis, image analysis, discriminant analysis and multifactorial analysis were used to achieve the research objectives. From the universe of variables for this study we selected a sample of n = 25 variables, of which the first two are modular, while the others belong to the common measurement space (n = 23), defined in this paper as the system of variables for the methods, procedures and assessment of polytrauma patients. After the multicollinearity analysis, and since the image analysis gave reliable measurement results, we proceeded to the analysis of eigenvalues, that is, to defining the factors that provide information on how well the existing model solves the problem and on its correlation with treatment outcome. The study singled out the essential factors that determine the current organizational model of care, which may affect treatment and improve the outcome of polytrauma patients. The analysis showed the maximal correlative relationships between these practices and contributed to the development of guidelines defined by the isolated factors.

  20. Predicting the graft survival for heart-lung transplantation patients: an integrated data mining methodology.

    PubMed

    Oztekin, Asil; Delen, Dursun; Kong, Zhenyu James

    2009-12-01

    Predicting the survival of heart-lung transplant patients has the potential to play a critical role in understanding and improving the matching procedure between the recipient and graft. Although voluminous data related to the transplantation procedures is being collected and stored, only a small subset of the predictive factors has been used in modeling heart-lung transplantation outcomes. Previous studies have mainly focused on applying statistical techniques to a small set of factors selected by domain experts in order to reveal simple linear relationships between the factors and survival. The collection of methods known as 'data mining' offers significant advantages over conventional statistical techniques in dealing with the latter's limitations, such as the normality assumption of observations, independence of observations from each other, and linearity of the relationship between the observations and the output measure(s). There are statistical methods that overcome these limitations; yet, they are computationally more expensive and do not provide fast and flexible solutions as data mining techniques do in large datasets. The main objective of this study is to improve the prediction of outcomes following combined heart-lung transplantation by proposing an integrated data-mining methodology. A large and feature-rich dataset (16,604 cases with 283 variables) is used to (1) develop machine learning based predictive models and (2) extract the most important predictive factors. Then, using three different variable selection methods, namely (i) machine-learning-driven variables (using decision trees, neural networks, and logistic regression), (ii) literature-review-based expert-defined variables, and (iii) common-sense-based interaction variables, a consolidated set of factors is generated and used to develop Cox regression models for heart-lung graft survival. The predictive models' performance in terms of 10-fold cross-validation accuracy rates for two multi-imputed datasets ranged from 79% to 86% for neural networks, from 78% to 86% for logistic regression, and from 71% to 79% for decision trees. The results indicate that the proposed integrated data mining methodology using Cox hazard models better predicted graft survival with different variables than the conventional approaches commonly used in the literature. This result is validated by the comparison of the corresponding gains charts for our proposed methodology and the literature-review-based Cox results, and by the comparison of the Akaike information criterion (AIC) values obtained from each. The data mining-based methodology proposed in this study reveals that there are undiscovered relationships (i.e. interactions of the existing variables) among the survival-related variables, which helps better predict the survival of heart-lung transplants. It also brings a different set of variables into the scene to be evaluated by the domain experts and considered prior to organ transplantation.
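
    For the final modelling step, a minimal sketch of fitting a Cox model for graft survival on a consolidated variable set, using the lifelines package on simulated data (the paper's actual variables and software are not reproduced here):

        # Minimal Cox proportional-hazards sketch with lifelines.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(5)
        n = 1000
        df = pd.DataFrame({
            "recipient_age": rng.normal(45, 12, n),
            "ischemia_time": rng.normal(4, 1, n),
            "duration": rng.exponential(60, n),   # months to event or censoring
            "event": rng.integers(0, 2, n),       # 1 = graft failure observed
        })
        cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
        cph.print_summary()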

  1. Determinants of job satisfaction for novice nurse managers employed in hospitals.

    PubMed

    Djukic, Maja; Jun, Jin; Kovner, Christine; Brewer, Carol; Fletcher, Jason

    Numbering close to 300,000, nurse managers represent the largest segment of the health care management workforce. Their effectiveness is, in part, influenced by their job satisfaction. We examined factors associated with job satisfaction of novice frontline nurse managers. We used a cross-sectional, correlational survey design. The sample consisted of responders to the fifth wave of a multiyear study of new nurses in 2013 (N = 1,392; response rate of 69%) who reported working as managers (n = 209). The parent study sample consisted of registered nurses who were licensed for the first time by exam 6-18 months prior in 1 of 51 selected metropolitan statistical areas and 9 rural areas across 34 U.S. states and the District of Columbia. We examined bivariate correlations between job satisfaction and 31 personal and structural variables. All variables significantly related to job satisfaction in bivariate analysis were included in a multivariate linear regression model. In addition, we tested the interaction effects of procedural justice and negative affectivity, autonomy, and organizational constraints on job satisfaction. The Cronbach's alphas for all multi-item scales ranged from .74 to .96. In the multivariate analysis, negative affectivity (β = -.169; p = .006) and procedural justice (β = .210; p = .016) were significantly correlated with job satisfaction. The combination of predictors in the model accounted for half of the variability in job satisfaction ratings (R² = .51, adjusted R² = .47; p < .001). Health care executives who want to cultivate an effective novice frontline nurse manager workforce can best ensure their satisfaction by creating an organization with strong procedural justice. This could be achieved by involving managers in decision-making processes and ensuring transparency about how decisions that affect nursing are made.

  2. Selecting a Variable for Predicting the Diagnosis of PTB Patients From Comparison of Chest X-ray Images

    NASA Astrophysics Data System (ADS)

    Mohd. Rijal, Omar; Mohd. Noor, Norliza; Teng, Shee Lee

    A statistical method of comparing two digital chest radiographs for Pulmonary Tuberculosis (PTB) patients has been proposed. After applying appropriate image registration procedures, a selected subset of each image is converted to an image histogram (or box plot). Comparing two chest X-ray images is equivalent to the direct comparison of the two corresponding histograms. From each histogram, eleven percentiles (of image intensity) are calculated. The number of percentiles that shift to the left (NLSP) when the second image is compared to the first has been shown to be an indicator of patients' progress. In this study, the values of NLSP are compared with the actual diagnosis (Y) of several medical practitioners. A logistic regression model is used to study the relationship between NLSP and Y. This study showed that NLSP may be used as an alternative or second opinion for Y. The proposed regression model also shows that important explanatory variables such as outcomes of the sputum test (Z) and degree of image registration (W) may be omitted when estimating Y-values.
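
    A small sketch of the NLSP statistic as described: take eleven intensity percentiles from each image histogram and count how many shift left from the first image to the second. The percentile grid and the toy images are assumptions.

        # Eleven percentiles per histogram; NLSP counts left shifts.
        import numpy as np

        rng = np.random.default_rng(6)
        img1 = rng.normal(120, 30, size=(256, 256))      # first radiograph (toy)
        img2 = img1 - 5 + rng.normal(0, 2, img1.shape)   # follow-up radiograph

        q = np.linspace(5, 95, 11)                       # eleven percentiles
        p1 = np.percentile(img1.ravel(), q)
        p2 = np.percentile(img2.ravel(), q)
        nlsp = int(np.sum(p2 < p1))                      # number of left shifts
        print("NLSP =", nlsp)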

  3. VVV Survey Search for Habitable Planets around M Dwarfs

    NASA Astrophysics Data System (ADS)

    Minniti, Dante

    2015-08-01

    VISTA Variables in the Vía Láctea (VVV) is a public ESO near- infrared (near-IR) variability survey aimed at scanning the Milky Way Bulge and an adjacent section of the mid-plane. The survey covers an area of 562 sqdeg in the Galactic bulge and the southern disk, containing a billion point sources. In this work we discuss the selection of nearby M-type dwarf stars using multicolor cuts. The ZYJHKs photometry allows an accurate estimation of the spectral types of the M-dwarf candidates. Our procedure is applied for fields located far from the Galactic center where the photometric quality is best. The results of this search covering 15 sqdeg allow us to estimate the total number of M-dwarfs that can be photometrically monitored in the VVV database. In addition, we analyze the light curves of the ~10000 best candidate M-dwarf stars searching for extrasolar planetary transits. In this poster we present the light curves of a hundred good transit candidates, and select those that lie in the HZ around their parent stars.

  4. Longitudinal-control design approach for high-angle-of-attack aircraft

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.; Proffitt, Melissa S.

    1993-01-01

    This paper describes a control synthesis methodology that emphasizes a variable-gain output feedback technique that is applied to the longitudinal channel of a high-angle-of-attack aircraft. The aircraft is a modified F/A-18 aircraft with thrust-vectored controls. The flight regime covers a range up to a Mach number of 0.7; an altitude range from 15,000 to 35,000 ft; and an angle-of-attack (alpha) range up to 70 deg, which is deep into the poststall region. A brief overview is given of the variable-gain mathematical formulation as well as a description of the discrete control structure used for the feedback controller. This paper also presents an approximate design procedure with relationships for the optimal weights for the selected feedback control structure. These weights are selected to meet control design guidelines for high-alpha flight controls. Those guidelines that apply to the longitudinal-control design are also summarized. A unique approach is presented for the feed-forward command generator to obtain smooth transitions between load factor and alpha commands. Finally, representative linear analysis results and nonlinear batch simulation results are provided.

  5. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.
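
    A conceptual sketch of the simulation-based planning step, under simplifying assumptions (point-based rigid registration via the Kabsch algorithm and an isotropic 2 mm localisation error): perturb candidate landmarks, re-register, and score the plane by the 90th percentile of target registration error.

        # Monte Carlo TRE estimate for one candidate landmark configuration.
        import numpy as np

        rng = np.random.default_rng(11)
        landmarks = rng.uniform(-30, 30, (6, 3))   # EUS-visible landmarks [mm]
        target = np.array([10.0, 5.0, 40.0])       # clinical target [mm]

        def rigid_fit(src, dst):
            # Least-squares rigid registration (Kabsch algorithm).
            sc, dc = src.mean(0), dst.mean(0)
            U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
            R = (U @ Vt).T
            if np.linalg.det(R) < 0:       # avoid reflections
                Vt[-1] *= -1
                R = (U @ Vt).T
            return R, dc - R @ sc

        tres = []
        for _ in range(1000):
            noisy = landmarks + rng.normal(0, 2.0, landmarks.shape)
            R, t = rigid_fit(noisy, landmarks)
            tres.append(np.linalg.norm((R @ target + t) - target))
        print("90th percentile TRE [mm]:", np.percentile(tres, 90))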

  6. Measuring Variable Refractive Indices Using Digital Photos

    ERIC Educational Resources Information Center

    Lombardi, S.; Monroy, G.; Testa, I.; Sassi, E.

    2010-01-01

    A new procedure for performing quantitative measurements in teaching optics is presented. Application of the procedure to accurately measure the rate of change of the variable refractive index of a water-denatured alcohol mixture is described. The procedure can also be usefully exploited for measuring the constant refractive index of distilled…

  7. Antibiosis and bmyB Gene Presence As Prevalent Traits for the Selection of Efficient Bacillus Biocontrol Agents against Crown Gall Disease.

    PubMed

    Frikha-Gargouri, Olfa; Ben Abdallah, Dorra; Bhar, Ilhem; Tounsi, Slim

    2017-01-01

    This study aimed to improve the screening method for the selection of Bacillus biocontrol agents against crown gall disease. The relationship between the strains' biocontrol ability and their in vitro traits was investigated to identify the most important factors to be considered in the selection of effective biocontrol agents. In fact, the previous selection procedure, which relied only on in vitro antibacterial activity, was shown to be unsuitable in some cases. A direct plant-protection strategy was used to screen the 32 Bacillus biocontrol agent candidates. Moreover, potential in vitro biocontrol traits were investigated, including biofilm formation, motility, hemolytic activity, detection of lipopeptide biosynthetic genes (sfp, ituC and bmyB) and production of antibacterial compounds. The results indicated high correlations of biocontrol efficiency with the reduction of gall weight (p = 0.000) and with in vitro antibacterial activity (p = 0.000). Moreover, there were strong correlations of biocontrol efficiency (p = 0.004) and of the reduction in gall weight (p = 0.000) with the presence of the bmyB gene. This gene directs the synthesis of the lipopeptide bacillomycin, which belongs to the iturinic family of lipopeptides. These results were also confirmed by two-way hierarchical cluster analysis and correspondence analysis, which showed the relatedness of these four variables. Based on these results, a new two-step screening procedure for Bacillus biocontrol agents against crown gall disease can be proposed. The first step consists of selecting strains with high in vitro antibacterial activity or those harbouring the bmyB gene; further selection is then performed on tomato plants in vivo. Moreover, based on the results of the biocontrol assay, five potent strains exhibiting high biocontrol abilities were selected. They were identified as Bacillus subtilis or Bacillus amyloliquefaciens. These strains were found to produce either surfactin, or surfactin and iturin lipopeptides. In conclusion, our study presents a new and effective method to evaluate the biocontrol ability of antagonistic Bacillus strains against crown gall disease that could increase the efficiency of screening for biocontrol agents. In addition, the selected strains could be used as novel biocontrol agents against pathogenic Agrobacterium tumefaciens strains.

  8. Antibiosis and bmyB Gene Presence As Prevalent Traits for the Selection of Efficient Bacillus Biocontrol Agents against Crown Gall Disease

    PubMed Central

    Frikha-Gargouri, Olfa; Ben Abdallah, Dorra; Bhar, Ilhem; Tounsi, Slim

    2017-01-01

    This study aimed to improve the screening method for the selection of Bacillus biocontrol agents against crown gall disease. The relationship between the strains' biocontrol ability and their in vitro traits was investigated to identify the most important factors to be considered in the selection of effective biocontrol agents. In fact, the previous selection procedure, which relied only on in vitro antibacterial activity, was shown to be unsuitable in some cases. A direct plant-protection strategy was used to screen the 32 Bacillus biocontrol agent candidates. Moreover, potential in vitro biocontrol traits were investigated, including biofilm formation, motility, hemolytic activity, detection of lipopeptide biosynthetic genes (sfp, ituC and bmyB) and production of antibacterial compounds. The results indicated high correlations of biocontrol efficiency with the reduction of gall weight (p = 0.000) and with in vitro antibacterial activity (p = 0.000). Moreover, there were strong correlations of biocontrol efficiency (p = 0.004) and of the reduction in gall weight (p = 0.000) with the presence of the bmyB gene. This gene directs the synthesis of the lipopeptide bacillomycin, which belongs to the iturinic family of lipopeptides. These results were also confirmed by two-way hierarchical cluster analysis and correspondence analysis, which showed the relatedness of these four variables. Based on these results, a new two-step screening procedure for Bacillus biocontrol agents against crown gall disease can be proposed. The first step consists of selecting strains with high in vitro antibacterial activity or those harbouring the bmyB gene; further selection is then performed on tomato plants in vivo. Moreover, based on the results of the biocontrol assay, five potent strains exhibiting high biocontrol abilities were selected. They were identified as Bacillus subtilis or Bacillus amyloliquefaciens. These strains were found to produce either surfactin, or surfactin and iturin lipopeptides. In conclusion, our study presents a new and effective method to evaluate the biocontrol ability of antagonistic Bacillus strains against crown gall disease that could increase the efficiency of screening for biocontrol agents. In addition, the selected strains could be used as novel biocontrol agents against pathogenic Agrobacterium tumefaciens strains. PMID:28855909

  9. Non-targeted 1H NMR fingerprinting and multivariate statistical analyses for the characterisation of the geographical origin of Italian sweet cherries.

    PubMed

    Longobardi, F; Ventrella, A; Bianco, A; Catucci, L; Cafagna, I; Gallo, V; Mastrorilli, P; Agostiano, A

    2013-12-01

    In this study, non-targeted 1H NMR fingerprinting was used in combination with multivariate statistical techniques for the classification of Italian sweet cherries based on their different geographical origins (Emilia Romagna and Puglia). As classification techniques, Soft Independent Modelling of Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and Linear Discriminant Analysis (LDA) were applied and the results compared. For LDA, before performing a refined selection of the number/combination of variables, two different strategies for a preliminary reduction of the variable number were tested. The best average recognition and CV prediction abilities (both 100.0%) were obtained for all the LDA models, although PLS-DA also showed remarkable performance (94.6%). All the statistical models were validated by observing the prediction abilities with respect to an external set of cherry samples. The best result (94.9%) was obtained with LDA by performing a best-subset selection procedure on a set of 30 principal components previously selected by a stepwise decorrelation. The metabolites that contributed most to the classification performance of this LDA model were found to be malate, glucose, fructose, glutamine and succinate. Copyright © 2013 Elsevier Ltd. All rights reserved.
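
    A hedged sketch of the shape of the best-performing pipeline (PCA-reduced variables feeding LDA, scored by cross-validation); synthetic data and component counts stand in for the bucketed NMR spectra.

        # PCA -> LDA pipeline with 10-fold cross-validated prediction ability.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        X, y = make_classification(n_samples=120, n_features=300,
                                   n_informative=12, n_classes=2, random_state=0)
        model = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
        print("CV prediction ability:", cross_val_score(model, X, y, cv=10).mean())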

  10. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR)

    PubMed Central

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-01-01

    Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100

  11. Sampling in health geography: reconciling geographical objectives and probabilistic methods. An example of a health survey in Vientiane (Lao PDR).

    PubMed

    Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard

    2007-06-01

    Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.

  12. MnDOT thin whitetopping selection procedures : final report.

    DOT National Transportation Integrated Search

    2017-06-01

    This report provides an integrated selection procedure for evaluating whether an existing hot-mix asphalt (HMA) pavement is an appropriate candidate for a bonded concrete overlay of asphalt (BCOA). The selection procedure includes (1) a desk review, ...

  13. Justifying scale type for a latent variable: Formative or reflective?

    NASA Astrophysics Data System (ADS)

    Liu, Hao; Bahron, Arsiah; Bagul, Awangku Hassanal Bahar Pengiran

    2015-12-01

    The study explored the possibility of creating a procedure at the experimental level to double-confirm whether a manifest variable's scale type is formative or reflective. At present, the criteria for making such a decision depend heavily on researchers' judgment at the conceptual and operational levels. The study created an experimental procedure that appears able to double-confirm the decisions from conceptual- and operational-level judgments. The experimental procedure includes the following tests: Variance Inflation Factor (VIF), Tolerance (TOL), Ridge Regression, Cronbach's alpha, Dillon-Goldstein's rho, and the first and second eigenvalues. The procedure considers both the multicollinearity and the consistency of the manifest variables. As a result, the procedure reached the same judgment as the carefully established decision making at the conceptual and operational level.
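
    Two of the listed checks are easy to reproduce; a minimal sketch on simulated indicators computes VIF/TOL (multicollinearity, relevant to formative scales) and Cronbach's alpha (internal consistency, relevant to reflective scales).

        # VIF/TOL and Cronbach's alpha on four correlated indicators.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(7)
        base = rng.normal(size=(200, 1))
        X = np.hstack([base + rng.normal(0, 0.5, (200, 1)) for _ in range(4)])

        Xc = sm.add_constant(X)
        vif = np.array([variance_inflation_factor(Xc, i + 1) for i in range(4)])
        print("VIF:", np.round(vif, 2), " TOL:", np.round(1 / vif, 2))

        k = X.shape[1]          # Cronbach's alpha from the item scores
        alpha = k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                               / X.sum(axis=1).var(ddof=1))
        print("Cronbach's alpha:", round(alpha, 3))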

  14. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as a specific combination of eruptive size and vent position), selected with subjective criteria. Probabilistic hazard assessments (PHA), by contrast, consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions into classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results are obtained by combining simulations for different volcanological and meteorological conditions, each weighted by its specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of event trees. The variability of each parameter is modeled with a specific probability density function, and meteorological and volcanological inputs are chosen using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thereby neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weighting events falling within the same size class. While a uniform weight for all the events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class receives a much larger weight than the smallest event of the subsequent size class. To overcome this problem, we propose an innovative solution that smoothly links the weight variability within each size class to the variability among size classes through a common power law, while simultaneously respecting the probability of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian event tree scheme enables tephra fall PHA, quantified through hazard curves and maps that provide readable results applicable to planning risk mitigation actions, as well as the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used to explore different meteorological conditions. The results clearly show that PHA accounting for the whole natural variability differs significantly from that based on representative scenarios, as is common practice in volcanic hazard assessment.
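
    A toy sketch of the proposed weighting, with an assumed power-law exponent: masses are drawn by stratified sampling within each size class, weighted by one common power law, and renormalized so each class keeps its conditional probability.

        # Within-class stratified draws, common power-law weights.
        import numpy as np

        rng = np.random.default_rng(8)
        class_bounds = [(1e10, 1e11), (1e11, 1e12), (1e12, 1e13)]  # mass [kg]
        class_prob = np.array([0.6, 0.3, 0.1])   # P(size class | eruption)
        b = 0.9                                  # assumed power-law exponent

        samples, weights = [], []
        for (lo, hi), p in zip(class_bounds, class_prob):
            m = np.exp(rng.uniform(np.log(lo), np.log(hi), 100))  # stratified
            w = m ** (-b)                    # common power law across classes
            samples.append(m)
            weights.append(p * w / w.sum())  # keep class-conditional probability
        masses, w = np.concatenate(samples), np.concatenate(weights)
        print("total weight:", w.sum(), " P(M > 5e11):", w[masses > 5e11].sum())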

  15. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results of the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
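
    A minimal Monte Carlo sketch of the probabilistic step, propagating scatter in uncertain variables through a placeholder buckling formula (a generic simply supported plate expression, not the paper's fuselage panel model):

        # Propagate material/geometry scatter to a buckling-load distribution.
        import numpy as np

        rng = np.random.default_rng(9)
        n = 100_000
        E = rng.normal(130e9, 0.05 * 130e9, n)   # modulus [Pa], 5% scatter
        t = rng.normal(2e-3, 0.02 * 2e-3, n)     # laminate thickness [m]
        a = rng.normal(0.5, 0.01 * 0.5, n)       # panel width [m]
        nu = 0.3

        D = E * t**3 / (12 * (1 - nu**2))        # bending stiffness
        N_cr = 4 * np.pi**2 * D / a**2           # buckling load per unit width

        print("mean:", N_cr.mean(), " cov:", N_cr.std() / N_cr.mean())
        print("1st percentile (reliability-based allowable):", np.percentile(N_cr, 1))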

  16. Did we choose the best one? A new site selection approach based on exposure and uptake potential for waste incineration.

    PubMed

    Demirarslan, K Onur; Korucu, M Kemal; Karademir, Aykan

    2016-08-01

    Ecological problems arising after the construction and operation of a waste incineration plant generally originate from incorrect decisions made during the selection of the location of the plant. The main objective of this study is to investigate how the selection method for the location of a new municipal waste incineration plant can be improved by using a dispersion modelling approach supported by geographical information systems and multi-criteria decision analysis. Considering this aim, the appropriateness of the current location of an existing plant was assessed by applying a pollution dispersion model. Using this procedure, a new location selection practice ranked a total of 90 candidate locations together with the site of the existing incinerator, and the current location of the plant was evaluated with ANOVA and Tukey tests. The initial ranking, made without modelling, was re-evaluated by modelling various variables with CALPUFF, including pollutant concentrations, population and population density, demography, temporality of meteorological data, pollutant type and risk formation type, and by re-ranking the results. The findings clearly indicate that the current plant is poorly sited, as the pollution dispersion model showed its location to be the fourth-worst choice among the 91 possibilities. It was concluded that location selection procedures for waste incinerators would benefit from coupling pollution dispersion studies with population density data to obtain the most suitable location. © The Author(s) 2016.

  17. Prediction of Success in External Cephalic Version under Tocolysis: Still a Challenge.

    PubMed

    Vaz de Macedo, Carolina; Clode, Nuno; Mendes da Graça, Luís

    2015-01-01

    External cephalic version is a procedure of fetal rotation to a cephalic presentation through manoeuvres applied to the maternal abdomen. Several prognostic factors for external cephalic version success have been described in the literature and prediction scores have been proposed, but their true implication in clinical practice is controversial. We aim to identify possible factors that could contribute to the success of an external cephalic version attempt in our population. We retrospectively examined 207 consecutive external cephalic version attempts under tocolysis conducted between January 1997 and July 2012. We consulted the department's database for the following variables: race, age, parity, maternal body mass index, gestational age, estimated fetal weight, breech category, placental location and amniotic fluid index. We performed descriptive and analytical statistics for each variable and binary logistic regression. External cephalic version was successful in 46.9% of cases (97/207). None of the included variables was associated with the outcome of external cephalic version attempts after adjustment for confounding factors. We present a success rate similar to what has been previously described in the literature. However, in contrast to previous authors, we could not associate any of the analysed variables with the success of the external cephalic version attempt. We believe this discrepancy is partly related to the type of statistical analysis performed. Even though numerous prognostic factors for success in external cephalic version have been identified, care must be taken when counselling and selecting patients for this procedure. The data obtained suggest that external cephalic version should continue to be offered to all eligible patients regardless of prognostic factors for success.

  18. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: Relationship between use of selection procedures and discrimination. 60-3.3 Section 60-3.3 Public Contracts and... PROGRAMS, EQUAL EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR 3-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.3 Discrimination defined: Relationship between use of selection...

  19. Procedures for generation and reduction of linear models of a turbofan engine

    NASA Technical Reports Server (NTRS)

    Seldner, K.; Cwynar, D. S.

    1978-01-01

    A real time hybrid simulation of the Pratt & Whitney F100-PW-100 turbofan engine was used for linear-model generation. The linear models were used to analyze the effect of disturbances about an operating point on the dynamic performance of the engine. A procedure that disturbs, samples, and records the state and control variables was developed. For large systems, such as the F100 engine, the state vector is large and may contain high-frequency information not required for control. Thus, reducing the full-state model to a reduced-order model may be a practical approach to simplifying the control design. A reduction technique was developed to generate reduced-order models. Selected linear and nonlinear output responses to exhaust-nozzle area and main-burner fuel flow disturbances are presented for comparison.
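
    As a hedged stand-in for the paper's reduction technique, the sketch below reduces a toy full-order linear model by balanced truncation using python-control's balred (which relies on the slycot backend); the system matrices are illustrative, not the F100 model.

        # Balanced truncation of a toy 16-state linear model down to 5 states.
        import numpy as np
        import control

        rng = np.random.default_rng(10)
        n = 16                                   # full state dimension (toy)
        A = -np.diag(rng.uniform(0.5, 50.0, n))  # stable dynamics
        B = rng.normal(size=(n, 2))              # fuel flow, nozzle area inputs
        C = rng.normal(size=(3, n))              # selected engine outputs
        sys_full = control.ss(A, B, C, np.zeros((3, 2)))

        sys_red = control.balred(sys_full, orders=5)  # keep 5 dominant states
        print(sys_red.nstates)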

  20. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
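
    For reference, a short sketch of the three criteria as usually computed from a Gaussian least-squares fit; with MRM the effective n is the number of distance pairs, which is what inflates the sample size described above.

        # AIC, AICc and BIC from the residual sum of squares of an OLS fit.
        import numpy as np

        def information_criteria(rss, n, k):
            # k regression coefficients (incl. intercept) + 1 error variance.
            ll = -n / 2 * (np.log(2 * np.pi * rss / n) + 1)
            p = k + 1
            aic = -2 * ll + 2 * p
            aicc = aic + 2 * p * (p + 1) / (n - p - 1)
            bic = -2 * ll + p * np.log(n)
            return aic, aicc, bic

        print(information_criteria(rss=42.0, n=45, k=3))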

  1. Psychological Selection of NASA Astronauts for International Space Station Missions

    NASA Technical Reports Server (NTRS)

    Galarza, Laura

    1999-01-01

    During the upcoming manned International Space Station (ISS) missions, astronauts will encounter the unique conditions of living and working with a multicultural crew in a confined and isolated space environment. The environmental, social, and mission-related challenges of these missions will require crewmembers to emphasize effective teamwork, leadership, group living and self-management to maintain the morale and productivity of the crew. The need for crew members to possess and display skills and behaviors needed for successful adaptability to ISS missions led us to upgrade the tools and procedures we use for astronaut selection. The upgraded tools include personality and biographical data measures. Content and construct-related validation techniques were used to link upgraded selection tools to critical skills needed for ISS missions. The results of these validation efforts showed that various personality and biographical data variables are related to expert and interview ratings of critical ISS skills. Upgraded and planned selection tools better address the critical skills, demands, and working conditions of ISS missions and facilitate the selection of astronauts who will more easily cope and adapt to ISS flights.

  2. Prediction of changes due to mandibular autorotation following miniplate-anchored intrusion of maxillary posterior teeth in open bite cases.

    PubMed

    Kassem, Hassan E; Marzouk, Eiman S

    2018-05-14

    Prediction of the treatment outcome of various orthodontic procedures is an essential part of treatment planning. Using skeletal anchorage for intrusion of posterior teeth is a relatively novel procedure for the treatment of anterior open bite in long-faced subjects. Data were analyzed from lateral cephalometric radiographs of a cohort of 28 open bite adult subjects treated with intrusion of the maxillary posterior segment with zygomatic miniplate anchorage. Mean ratios and regression equations were calculated for selected variables before and after intrusion. Relative to molar intrusion, there was approximately 100% vertical change of the hard and soft tissue menton and 80% horizontal change of the hard and soft tissue pogonion. The overbite deepened twofold, with a 60% increase in overjet. The lower lip moved forward about 80% of the molar intrusion. Hard tissue pogonion and menton showed the strongest correlations with molar intrusion. There was general agreement between the regression equations and the mean ratios at 3 mm of molar intrusion. This study attempted to provide the clinician with a tool to predict the changes in key treatment variables following skeletally anchored maxillary molar intrusion and autorotation of the mandible.
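
    A toy sketch contrasting the two prediction tools reported, a mean ratio versus a fitted regression equation, for a variable such as horizontal pogonion change; the numbers are illustrative, not the study's.

        # Mean-ratio versus regression prediction at 3 mm of molar intrusion.
        import numpy as np

        rng = np.random.default_rng(12)
        intrusion = rng.uniform(1.5, 4.5, 28)                  # [mm]
        pog_change = 0.8 * intrusion + rng.normal(0, 0.3, 28)  # [mm]

        ratio = (pog_change / intrusion).mean()        # mean-ratio predictor
        slope, intercept = np.polyfit(intrusion, pog_change, 1)

        x = 3.0   # predict at 3 mm intrusion, where the paper finds agreement
        print("ratio-based:", ratio * x, " regression-based:", slope * x + intercept)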

  3. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…

  4. On the use of variable coherence in inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Baleine, Erwan

    Even though most of the properties of optical fields, such as wavelength, polarization, wavefront curvature or angular spectrum, have been commonly manipulated in a variety of remote sensing procedures, controlling the degree of coherence of light did not find wide applications until recently. Since the emergence of optical coherence tomography, a growing number of scattering techniques have relied on temporal coherence gating which provides efficient target selectivity in a way achieved only by bulky short pulse measurements. The spatial counterpart of temporal coherence, however, has barely been exploited in sensing applications. This dissertation examines, in different scattering regimes, a variety of inverse scattering problems based on variable spatial coherence gating. Within the framework of the radiative transfer theory, this dissertation demonstrates that the short range correlation properties of a medium under test can be recovered by varying the size of the coherence volume of an illuminating beam. Nonetheless, the radiative transfer formalism does not account for long range correlations and current methods for retrieving the correlation function of the complex susceptibility require cumbersome cross-spectral density measurements. Instead, a variable coherence tomographic procedure is proposed where spatial coherence gating is used to probe the structural properties of single scattering media over an extended volume and with a very simple detection system. Enhanced backscattering is a coherent phenomenon that survives strong multiple scattering. The variable coherence tomography approach is extended in this context to diffusive media and it is demonstrated that specific photon trajectories can be selected in order to achieve depth-resolved sensing. Probing the scattering properties of shallow and deeper layers is of considerable interest in biological applications such as diagnosis of skin related diseases. The spatial coherence properties of an illuminating field can be manipulated over dimensions much larger than the wavelength thus providing a large effective sensing area. This is a practical advantage over many near-field microscopic techniques, which offer a spatial resolution beyond the classical diffraction limit but, at the expense of scanning a probe over a large area of a sample which is time consuming, and, sometimes, practically impossible. Taking advantage of the large field of view accessible when using the spatial coherence gating, this dissertation introduces the principle of variable coherence scattering microscopy. In this approach, a subwavelength resolution is achieved from simple far-zone intensity measurements by shaping the degree of spatial coherence of an evanescent field. Furthermore, tomographic techniques based on spatial coherence gating are especially attractive because they rely on simple detection schemes which, in principle, do not require any optical elements such as lenses. To demonstrate this capability, a correlated lensless imaging method is proposed and implemented, where both amplitude and phase information of an object are obtained by varying the degree of spatial coherence of the incident beam. Finally, it should be noted that the idea of using the spatial coherence properties of fields in a tomographic procedure is applicable to any type of electromagnetic radiation. 
Operating on principles of statistical optics, these sensing procedures can become alternatives for various target detection schemes, cutting-edge microscopies or x-ray imaging methods.

  5. Engineering Students Designing a Statistical Procedure for Quantifying Variability

    ERIC Educational Resources Information Center

    Hjalmarson, Margret A.

    2007-01-01

    The study examined first-year engineering students' responses to a statistics task that asked them to generate a procedure for quantifying variability in a data set from an engineering context. Teams used technological tools to perform computations, and their final product was a ranking procedure. The students could use any statistical measures,…

  6. 78 FR 72878 - Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-04

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. RM10-11-000] Integration of Variable Energy Resources; Notice Of Filing Procedures for Order No. 764 Electronic Compliance Filings Take notice of the following filing procedures with respect to compliance obligations in Integration of...

  7. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches.

    PubMed

    Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas, whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying considerable variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the predictive performance of all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors based on their individual performance was outperformed by the two procedures that accounted for predictor interaction.
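
    To make the evaluation scheme above concrete, the following minimal Python sketch compares a tuned boosted-tree learner against a random forest using five repetitions of tenfold cross-validation, as in the study. The data are synthetic and the tuning grids are illustrative assumptions, not the settings used by the authors.

    # Sketch: compare tuned learners via five repeats of tenfold cross-validation.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
    from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

    X, y = make_regression(n_samples=300, n_features=50, noise=10.0, random_state=0)
    cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)

    candidates = {
        "boosted_trees": GridSearchCV(
            GradientBoostingRegressor(random_state=0),
            {"n_estimators": [100, 300], "learning_rate": [0.03, 0.1]}, cv=5),
        "random_forest": GridSearchCV(
            RandomForestRegressor(random_state=0),
            {"max_features": ["sqrt", 0.3]}, cv=5),
    }
    for name, model in candidates.items():
        # model tuning happens inside each training fold (nested CV)
        scores = cross_val_score(model, X, y, cv=cv,
                                 scoring="neg_root_mean_squared_error")
        print(f"{name}: RMSE {-scores.mean():.2f} +/- {scores.std():.2f}")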

  8. Patient selection, echocardiographic screening and treatment strategies for interventional tricuspid repair using the edge-to-edge repair technique.

    PubMed

    Hausleiter, Jörg; Braun, Daniel; Orban, Mathias; Latib, Azeem; Lurz, Philipp; Boekstegers, Peter; von Bardeleben, Ralph Stephan; Kowalski, Marek; Hahn, Rebecca T; Maisano, Francesco; Hagl, Christian; Massberg, Steffen; Nabauer, Michael

    2018-04-24

    Severe tricuspid regurgitation (TR) has long been neglected despite its well-known association with mortality. While surgical mortality rates remain high in isolated tricuspid valve surgery, interventional TR repair is rapidly evolving as an alternative to cardiac surgery in selected patients at high surgical risk. Currently, interventional edge-to-edge repair is the most frequently applied technique for TR repair even though the device was not developed for this particular indication. Owing to the inherent differences between tricuspid and mitral valve anatomy and pathology, percutaneous repair of the tricuspid valve is challenging for a variety of reasons, including the complexity and variability of tricuspid valve anatomy, echocardiographic visibility of the valve leaflets, and device steering to the tricuspid valve. Furthermore, it remains to be clarified which patients are suitable for percutaneous tricuspid repair and which features predict a successful procedure. On the basis of the available experience, we describe criteria for patient selection including morphological valve features, a standardized process for echocardiographic screening, and a strategy for clip placement. These criteria will help to standardize valve assessment and the procedural approach, and to further develop interventional tricuspid valve repair using either currently available devices or dedicated tricuspid edge-to-edge repair devices in the future. In summary, this manuscript provides guidance for patient selection and echocardiographic screening when considering edge-to-edge repair for severe TR.

  9. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes, and it is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid when used under the missing at random (MAR) and missing completely at random (MCAR) assumptions, largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
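
    A minimal sketch of the first strategy described above: run a selector on each of several imputed datasets and keep the variables selected in most of them. The MCAR missingness pattern, the lasso as the selector, and the majority-vote combination rule are all illustrative assumptions.

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)
    X[rng.random(X.shape) < 0.1] = np.nan            # 10% of values missing (MCAR)

    selected = np.zeros(X.shape[1])
    n_imputations = 5
    for m in range(n_imputations):
        # sample_posterior=True makes each imputed dataset differ, as in MI
        Xm = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
        coef = LassoCV(cv=5).fit(Xm, y).coef_
        selected += (coef != 0)

    keep = np.where(selected / n_imputations >= 0.5)[0]   # majority vote
    print("selected variables:", keep)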

  10. Optimal Combinations of Diagnostic Tests Based on AUC.

    PubMed

    Huang, Xin; Qin, Gengsheng; Fang, Yixin

    2011-06-01

    When several diagnostic tests are available, one can combine them to achieve better diagnostic accuracy. This article considers the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); the estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is overly optimistic. To adjust for the upward bias, several methods are proposed. Among them the cross-validation approach is especially advocated, and an approximated cross-validation is developed to reduce the computational cost. Furthermore, these proposed methods can be applied for variable selection to choose the important diagnostic tests. The proposed methods are examined through simulation studies and applications to three real examples. © 2010, The International Biometric Society.
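
    The following sketch illustrates the general idea rather than the authors' estimator: it grid-searches the direction that maximizes the empirical AUC of a linear combination of two simulated tests, then contrasts the optimistic re-substitution (apparent) AUC with a cross-validated one.

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(1)
    n = 200
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + np.outer(y, [0.8, 0.5])   # two diagnostic tests

    def best_coefs(X, y, n_grid=181):
        # search combination directions (cos t, sin t) over a half circle
        thetas = np.linspace(0, np.pi, n_grid)
        aucs = [roc_auc_score(y, X @ [np.cos(t), np.sin(t)]) for t in thetas]
        t = thetas[int(np.argmax(aucs))]
        return np.array([np.cos(t), np.sin(t)])

    cv_aucs = []
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
        w = best_coefs(X[tr], y[tr])                  # coefficients fit on train
        cv_aucs.append(roc_auc_score(y[te], X[te] @ w))
    print("apparent AUC:", roc_auc_score(y, X @ best_coefs(X, y)))
    print("cross-validated AUC:", np.mean(cv_aucs))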

  11. Final-state QED multipole radiation in antenna parton showers

    NASA Astrophysics Data System (ADS)

    Kleiss, Ronald; Verheyen, Rob

    2017-11-01

    We present a formalism for a fully coherent QED parton shower. The complete multipole structure of photonic radiation is incorporated in a single branching kernel. The regular on-shell 2 → 3 kinematic picture is kept intact by dividing the radiative phase space into sectors, allowing for a definition of the ordering variable similar to that of QCD antenna showers. A modified version of the Sudakov veto algorithm is discussed that increases performance at the cost of introducing weighted events. Due to the absence of a soft singularity, the formalism for photon splitting is very similar to its QCD analogue, gluon splitting. However, since no color structure is available to guide the selection of a spectator, a weighted selection procedure over all available spectators is introduced.
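
    For orientation, here is a sketch of the standard (unmodified) Sudakov veto algorithm that the paper builds on: it samples an emission scale distributed as f(t) times the Sudakov factor, using an analytically invertible overestimate g >= f. The kernels below are toy choices, not the QED antenna kernels of the formalism.

    import random

    def sudakov_veto(t0, tmin, accept_prob, next_trial):
        """Return an emission scale distributed as f(t)*exp(-int f), or None."""
        t = t0
        while True:
            t = next_trial(t, random.random())    # trial scale from overestimate g
            if t < tmin:
                return None                       # evolution fell below the cutoff
            if random.random() < accept_prob(t):  # accept with probability f(t)/g(t)
                return t

    # Toy kernels: f(t) = C/(t*(1+t)), overestimated by g(t) = C/t, whose
    # Sudakov factor inverts analytically to t' = t * r**(1/C).
    C = 2.0
    scales = [sudakov_veto(1.0, 1e-6,
                           accept_prob=lambda t: 1.0 / (1.0 + t),
                           next_trial=lambda t, r: t * r ** (1.0 / C))
              for _ in range(5)]
    print(scales)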

  12. Estimation of effective hydrologic properties of soils from observations of vegetation density

    NASA Technical Reports Server (NTRS)

    Tellers, T. E.; Eagleson, P. S.

    1980-01-01

    A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare soil component of evaporation and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given water-limited climate-soil system, is verified through comparisons with observed data. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterizing the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration and the variability of soil properties from place to place over the Earth's surface.

  13. Adaptive sampling in behavioral surveys.

    PubMed

    Thompson, S K

    1997-01-01

    Studies of populations such as drug users encounter difficulties because the members of the populations are rare, hidden, or hard to reach. Conventionally designed large-scale surveys detect relatively few members of the populations so that estimates of population characteristics have high uncertainty. Ethnographic studies, on the other hand, reach suitable numbers of individuals only through the use of link-tracing, chain referral, or snowball sampling procedures that often leave the investigators unable to make inferences from their sample to the hidden population as a whole. In adaptive sampling, the procedure for selecting people or other units to be in the sample depends on variables of interest observed during the survey, so the design adapts to the population as encountered. For example, when self-reported drug use is found among members of the sample, sampling effort may be increased in nearby areas. Types of adaptive sampling designs include ordinary sequential sampling, adaptive allocation in stratified sampling, adaptive cluster sampling, and optimal model-based designs. Graph sampling refers to situations with nodes (for example, people) connected by edges (such as social links or geographic proximity). An initial sample of nodes or edges is selected and edges are subsequently followed to bring other nodes into the sample. Graph sampling designs include network sampling, snowball sampling, link-tracing, chain referral, and adaptive cluster sampling. A graph sampling design is adaptive if the decision to include linked nodes depends on variables of interest observed on nodes already in the sample. Adjustment methods for nonsampling errors such as imperfect detection of drug users in the sample apply to adaptive as well as conventional designs.
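
    The adaptive cluster sampling idea can be sketched in a few lines: starting from a random initial sample of grid cells, neighbours are added whenever the observed count meets a condition. The grid, counts and condition below are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    grid = rng.poisson(0.2, size=(20, 20))           # rare population over a grid
    grid[5:8, 5:8] += rng.poisson(3, size=(3, 3))    # a hidden cluster

    def neighbours(i, j, n):
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            if 0 <= i + di < n and 0 <= j + dj < n:
                yield (i + di, j + dj)

    n = grid.shape[0]
    initial = {tuple(rng.integers(0, n, 2)) for _ in range(15)}
    sample, frontier = set(initial), list(initial)
    while frontier:
        cell = frontier.pop()
        if grid[cell] >= 2:                          # condition triggers adaptive step
            for nb in neighbours(*cell, n):
                if nb not in sample:
                    sample.add(nb)
                    frontier.append(nb)

    print(f"initial units: {len(initial)}, final sample: {len(sample)}")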

  14. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...

  15. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...

  16. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...

  17. 41 CFR 60-3.3 - Discrimination defined: Relationship between use of selection procedures and discrimination.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...

  18. Experimental design for evaluating WWTP data by linear mass balances.

    PubMed

    Le, Quan H; Verheijen, Peter J T; van Loosdrecht, Mark C M; Volcke, Eveline I P

    2018-05-15

    A stepwise experimental design procedure to obtain reliable data from wastewater treatment plants (WWTPs) was developed. The proposed procedure aims at determining sets of additional measurements (besides available ones) that guarantee the identifiability of key process variables, which means that their value can be calculated from other, measured variables based on available constraints in the form of linear mass balances. Among all solutions, i.e. all possible sets of additional measurements allowing the identifiability of all key process variables, the optimal solutions were found by taking into account two objectives, namely the accuracy of the identified key variables and the cost of additional measurements. The results of this multi-objective optimization problem were represented in a Pareto-optimal front. The presented procedure was applied to a full-scale WWTP. Detailed analysis of the relations between measurements allowed the determination of groups of overlapping mass balances. Adding measured variables could only serve in identifying key variables that appear in the same group of mass balances. Moreover, the application of the experimental design procedure to these individual groups significantly reduced the computational effort in evaluating available measurements and planning additional monitoring campaigns. The proposed procedure is straightforward and can be applied to other WWTPs with or without prior data collection. Copyright © 2018 Elsevier Ltd. All rights reserved.
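
    A minimal sketch of the identifiability check that such a procedure rests on: with linear balances A x = 0 partitioned into measured and unmeasured columns, an unmeasured variable is identifiable exactly when its coordinate vanishes on the null space of the unmeasured block. The balance matrix and measured set below are invented for illustration.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1, -1, -1,  0,  0],      # node 1: x0 = x1 + x2
                  [0,  0,  1, -1, -1]])     # node 2: x2 = x3 + x4
    measured = [0, 1]                       # indices of measured flows
    unmeasured = [i for i in range(A.shape[1]) if i not in measured]

    N = null_space(A[:, unmeasured])        # directions leaving all balances intact
    for k, idx in enumerate(unmeasured):
        # identifiable iff the variable cannot move along any null direction
        identifiable = N.size == 0 or np.allclose(N[k, :], 0)
        print(f"x{idx} identifiable: {identifiable}")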

  19. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. As soon as the adaptive filter reaches the steady state, however, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
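
    For context, here is a sketch of the standard affine projection update that the selective-update variant builds on; the selection and state-decision logic of the paper is not reproduced. Filter order, projection order and step size are illustrative, and the toy desired signal is noise-free.

    import numpy as np

    rng = np.random.default_rng(0)
    L, P, mu, delta = 8, 4, 0.5, 1e-3        # filter order, projection order
    w_true = rng.normal(size=L)
    x = rng.normal(size=5000)                # white input; APA shines on colored input
    w = np.zeros(L)
    for n in range(L + P, len(x)):
        # columns of X are the P most recent input (regressor) vectors
        X = np.column_stack([x[n - p - L + 1: n - p + 1][::-1] for p in range(P)])
        d = X.T @ w_true                     # desired responses (noise-free toy)
        e = d - X.T @ w                      # a priori errors for the P vectors
        w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)

    print("coefficient error:", np.linalg.norm(w - w_true))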

  20. Predictors of self-reported negative mood following a depressive mood induction procedure across previously depressed, currently anxious, and control individuals.

    PubMed

    Scherrer, Martin C; Dobson, Keith S; Quigley, Leanne

    2014-09-01

    This study identified and examined a set of potential predictors of self-reported negative mood following a depressive mood induction procedure (MIP) in a sample of previously depressed, clinically anxious, and control participants. The examined predictor variables were selected on the basis of previous research and theories of depression, and included symptoms of depression and anxiety, negative and positive affect, negative and positive automatic thoughts, dysfunctional beliefs, rumination, self-concept, and occurrence and perceived unpleasantness of recent negative events. The sample consisted of 33 previously depressed, 22 currently anxious, and 26 non-clinical control participants, recruited from community sources. Participant group status was confirmed through structured diagnostic interviews. Participants completed the Velten negative self-statement MIP as well as self-report questionnaires of affective, cognitive, and psychosocial variables selected as potential predictors of mood change. Symptoms of anxiety were associated with increased self-reported negative mood shift following the MIP in previously depressed participants, but not clinically anxious or control participants. Increased occurrence of recent negative events was a marginally significant predictor of negative mood shift for the previously depressed participants only. None of the other examined variables were significant predictors of MIP response for any of the participant groups. These results identify factors that may increase susceptibility to negative mood states in previously depressed individuals, with implications for theory and prevention of relapse to depression. The findings also identify a number of affective, cognitive, and psychosocial variables that do not appear to influence mood change following a depressive MIP in previously depressed, currently anxious, and control individuals. Limitations of the study and directions for future research are discussed. Current anxiety symptomatology was a significant predictor, and occurrence of recent negative events a marginally significant predictor, of greater negative mood shift following the depressive mood induction for previously depressed individuals. None of the examined variables predicted change in mood following the depressive mood induction for currently anxious or control individuals. These results suggest that anxiety symptoms and experience with negative events may increase risk for experiencing depressive mood states among individuals with a vulnerability to depression. The generalizability of the present results to individuals with comorbid depression and anxiety is limited. Future research employing appropriate statistical approaches for confirmatory research is needed to test and confirm the present results. © 2014 The British Psychological Society.

  1. Automated Routines for Calculating Whole-Stream Metabolism: Theoretical Background and User's Guide

    USGS Publications Warehouse

    Bales, Jerad D.; Nardi, Mark R.

    2007-01-01

    In order to standardize methods and facilitate rapid calculation and archival of stream-metabolism variables, the Stream Metabolism Program was developed to calculate gross primary production, net ecosystem production, respiration, and selected other variables from continuous measurements of dissolved-oxygen concentration, water temperature, and other user-supplied information. Methods for calculating metabolism from continuous measurements of dissolved-oxygen concentration and water temperature are fairly well known, but a standard set of procedures and computation software for all aspects of the calculations were not available previously. The Stream Metabolism Program addresses this deficiency with a stand-alone executable computer program written in Visual Basic .NET, which runs in the Microsoft Windows environment. All equations and assumptions used in the development of the software are documented in this report. Detailed guidance on application of the software is presented, along with a summary of the data required to use the software. Data from either a single station or paired (upstream, downstream) stations can be used with the software to calculate metabolism variables.
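
    As a rough illustration of the kind of calculation such software automates, the sketch below applies the single-station open-water idea: net metabolism is the observed rate of change of dissolved oxygen corrected for reaeration. The concentrations, saturation value and reaeration coefficient are invented stand-ins for user-supplied data, and the program's actual equations may differ.

    import numpy as np

    dt = 0.25                                 # hours between measurements
    do = np.array([8.1, 8.3, 8.8, 9.4, 9.9, 9.7, 9.2, 8.6])   # dissolved O2, mg/L
    do_sat = 9.0                              # saturation concentration, mg/L
    k = 0.4                                   # reaeration coefficient, 1/h

    reaeration = k * (do_sat - do[:-1])       # O2 exchanged with the atmosphere
    nep = np.diff(do) / dt - reaeration       # net ecosystem production, mg O2/L/h
    print("net ecosystem production by interval:", np.round(nep, 2))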

  2. Group Comparisons in the Presence of Missing Data Using Latent Variable Modeling Techniques

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2010-01-01

    A latent variable modeling approach for examining population similarities and differences in observed variable relationship and mean indexes in incomplete data sets is discussed. The method is based on the full information maximum likelihood procedure of model fitting and parameter estimation. The procedure can be employed to test group identities…

  3. Accounting for temporal variation in soil hydrological properties when simulating surface runoff on tilled plots

    NASA Astrophysics Data System (ADS)

    Chahinian, Nanée; Moussa, Roger; Andrieux, Patrick; Voltz, Marc

    2006-07-01

    Tillage operations are known to greatly influence local overland flow, infiltration and depressional storage by altering soil hydraulic properties and soil surface roughness. The calibration of runoff models for tilled fields is not identical to that of untilled fields, as it has to take into consideration the temporal variability of parameters due to the transient nature of surface crusts. In this paper, we apply a rainfall-runoff model and develop a calibration methodology that takes into account the impact of tillage on overland flow simulation at the scale of a tilled plot (3240 m2) located in southern France. The selected model couples the Morel-Seytoux infiltration equation (Morel-Seytoux, H.J., 1978. Derivation of equations for variable rainfall infiltration. Water Resources Research 14(4), 561-568) to a transfer function based on the diffusive wave equation. The parameters to be calibrated are the hydraulic conductivity at natural saturation Ks, the surface detention Sd and the lag time ω. A two-step calibration procedure is presented. First, eleven rainfall-runoff events are calibrated individually and the variability of the calibrated parameters is analysed. The individually calibrated Ks values decrease monotonically with the total amount of rainfall since tillage. No clear relationship is observed between the two parameters Sd and ω and the date of tillage. However, the lag time ω increases inversely with the peak flow of the events. Fairly good agreement is observed between the simulated and measured hydrographs of the calibration set. Simple mathematical laws describing the evolution of Ks and ω are selected, while Sd is considered constant. The second step involves the collective calibration of the law of evolution of each parameter on the whole calibration set. This procedure is calibrated on 11 events and validated on ten runoff-inducing and four non-runoff-inducing rainfall events. The suggested calibration methodology seems robust and can be transposed to other gauged sites.

  4. Evaluating measurement models in clinical research: covariance structure analysis of latent variable models of self-conception.

    PubMed

    Hoyle, R H

    1991-02-01

    Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.

  5. Effects of a cognitive dual task on variability and local dynamic stability in sustained repetitive arm movements using principal component analysis: a pilot study.

    PubMed

    Longo, Alessia; Federolf, Peter; Haid, Thomas; Meulenbroek, Ruud

    2018-06-01

    In many daily jobs, repetitive arm movements are performed for extended periods of time under continuous cognitive demands. Even highly monotonous tasks exhibit an inherent motor variability and subtle fluctuations in movement stability. Variability and stability are different aspects of system dynamics, whose magnitude may be further affected by a cognitive load. Thus, the aim of the study was to explore and compare the effects of a cognitive dual task on the variability and local dynamic stability in a repetitive bimanual task. Thirteen healthy volunteers performed the repetitive motor task with and without a concurrent cognitive task of counting aloud backwards in multiples of three. Upper-body 3D kinematics were collected, and postural reconfigurations (the variability related to the volunteer's postural change) were determined through a principal component analysis-based procedure. Subsequently, the most salient component was selected for the analysis of (1) cycle-to-cycle spatial and temporal variability, and (2) local dynamic stability as reflected by the largest Lyapunov exponent. Finally, end-point variability was evaluated as a control measure. The dual cognitive task proved to increase the temporal variability and reduce the local dynamic stability, marginally decrease end-point variability, and substantially lower the incidence of postural reconfigurations. In particular, the latter effect is considered relevant for the prevention of work-related musculoskeletal disorders, since reduced variability in sustained repetitive tasks might increase the risk of overuse injuries.

  6. Bayesian classification for the selection of in vitro human embryos using morphological and clinical data.

    PubMed

    Morales, Dinora Araceli; Bengoetxea, Endika; Larrañaga, Pedro; García, Miguel; Franco, Yosu; Fresnada, Mónica; Merino, Marisa

    2008-05-01

    In vitro fertilization (IVF) is a medically assisted reproduction technique that enables infertile couples to achieve successful pregnancy. Given the uncertainty of the treatment, we propose an intelligent decision support system, based on supervised classification by Bayesian classifiers, to aid in the selection of the most promising embryos that will form the batch to be transferred to the woman's uterus. The aim of the supervised classification system is to improve the overall success rate of each IVF treatment in which a batch of embryos is transferred each time, where success is achieved when implantation (i.e. pregnancy) is obtained. For ethical reasons, legislative restrictions on this technique differ from country to country. In Spain, legislation allows a maximum of three embryos to form each transfer batch. As a result, clinicians prefer to select the embryos by non-invasive embryo examination based on simple methods and observation focused on the morphology and dynamics of embryo development after fertilization. This paper proposes the application of Bayesian classifiers to this embryo selection problem in order to provide a decision support system that allows a more accurate selection than the current procedures, which rely fully on the expertise and experience of embryologists. For this, we propose to consider a reduced subset of feature variables related to embryo morphology and clinical data of patients, and from these data to induce Bayesian classification models. Results obtained applying a filter technique to choose the subset of variables, and the performance of Bayesian classifiers using them, are presented.
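
    A compact sketch of the modelling pipeline described above, assuming a filter selection step followed by Gaussian naive Bayes; the synthetic features stand in for the morphological and clinical variables, and the numbers are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # synthetic stand-in for embryo morphology and clinical features
    X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                               random_state=0)
    # filter step picks a reduced subset before the Bayesian classifier
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=8), GaussianNB())
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())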

  7. Application and testing of a procedure to evaluate transferability of habitat suitability criteria

    USGS Publications Warehouse

    Thomas, Jeff A.; Bovee, Ken D.

    1993-01-01

    A procedure designed to test the transferability of habitat suitability criteria was evaluated in the Cache la Poudre River, Colorado. Habitat suitability criteria were developed for active adult and juvenile rainbow trout in the South Platte River, Colorado. These criteria were tested by comparing microhabitat use predicted from the criteria with observed microhabitat use by adult rainbow trout in the Cache la Poudre River. A one-sided chi-squared test, using counts of occupied and unoccupied cells in each suitability classification, was used to test for non-random selection of optimum habitat over usable habitat and of suitable over unsuitable habitat. Criteria for adult rainbow trout were judged to be transferable to the Cache la Poudre River, but juvenile criteria (applied to adults) were not transferable. Random subsampling of occupied and unoccupied cells was conducted to determine the effect of sample size on the reliability of the test procedure. The incidence of type I and type II errors increased rapidly as the sample size was reduced below 55 occupied and 200 unoccupied cells. Recommended modifications to the procedure included the adoption of a systematic or randomized sampling design and direct measurement of microhabitat variables. With these modifications, the procedure is economical, simple and reliable. Use of the procedure as a quality assurance device in routine applications of the instream flow incremental methodology was encouraged.
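
    The transferability test can be sketched as follows, with invented counts of occupied and unoccupied cells; the one-sided p-value is obtained by halving the two-sided chi-squared p-value when selection runs in the predicted direction.

    import numpy as np
    from scipy.stats import chi2_contingency

    #                 occupied  unoccupied
    table = np.array([[42,        120],     # suitable cells
                      [13,        230]])    # unsuitable cells

    chi2, p_two_sided, dof, _ = chi2_contingency(table, correction=False)
    occ_rate = table[:, 0] / table.sum(axis=1)
    # halve the two-sided p-value when occupancy is higher in suitable habitat
    p_one_sided = (p_two_sided / 2 if occ_rate[0] > occ_rate[1]
                   else 1 - p_two_sided / 2)
    print(f"chi2 = {chi2:.2f}, one-sided p = {p_one_sided:.4g}")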

  8. Diagnostic procedures for non-small-cell lung cancer (NSCLC): recommendations of the European Expert Group

    PubMed Central

    Dietel, Manfred; Bubendorf, Lukas; Dingemans, Anne-Marie C; Dooms, Christophe; Elmberger, Göran; García, Rosa Calero; Kerr, Keith M; Lim, Eric; López-Ríos, Fernando; Thunnissen, Erik; Van Schil, Paul E; von Laffert, Maximilian

    2016-01-01

    Background There is currently no Europe-wide consensus on the appropriate preanalytical measures and workflow to optimise procedures for tissue-based molecular testing of non-small-cell lung cancer (NSCLC). To address this, a group of lung cancer experts (see list of authors) convened to discuss and propose standard operating procedures (SOPs) for NSCLC. Methods Building on earlier meetings and scientific expertise on lung cancer, a multidisciplinary group meeting was arranged. The aim was to include all relevant aspects concerning NSCLC diagnosis. After careful consideration, the following topics were selected and each was reviewed by the experts: surgical resection and sampling; biopsy procedures for analysis; preanalytical and other variables affecting quality of tissue; tissue conservation; testing procedures for epidermal growth factor receptor, anaplastic lymphoma kinase and ROS proto-oncogene 1, receptor tyrosine kinase (ROS1) in lung tissue and cytological specimens; as well as standardised reporting and quality control (QC). Results Suggested optimal procedures and workflows are discussed in detail. The broad consensus was that the complex workflow presented can only be executed effectively by an interdisciplinary approach using a well-trained team. Conclusions To optimise diagnosis and treatment of patients with NSCLC, it is essential to establish SOPs that are adaptable to the local situation. In addition, a continuous QC system and a local multidisciplinary tumour-type-oriented board are essential. PMID:26530085

  9. 29 CFR 1606.6 - Selection procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 4 2012-07-01 2012-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...

  10. 29 CFR 1606.6 - Selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 4 2011-07-01 2011-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...

  11. 29 CFR 1606.6 - Selection procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 4 2014-07-01 2014-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...

  12. 29 CFR 1606.6 - Selection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...

  13. 29 CFR 1606.6 - Selection procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 4 2013-07-01 2013-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...

  14. 29 CFR 1607.18 - Citations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...

  15. 29 CFR 1607.18 - Citations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...

  16. 29 CFR 1607.18 - Citations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...

  17. 29 CFR 1607.18 - Citations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...

  18. 29 CFR 1607.18 - Citations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...

  19. Efficient Variable Selection Method for Exposure Variables on Binary Data

    NASA Astrophysics Data System (ADS)

    Ohno, Manabu; Tarumi, Tomoyuki

    In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variable is selected in both the original data and perturbed data. There are few studies of effective methods for this selection problem. The problem of selecting exposure variables is almost the same as the problem of extracting correlation rules without robustness. [Brin 97] suggested that correlation rules can be extracted efficiently using the chi-squared statistic of a contingency table, which has a monotone property on binary data. The chi-squared value itself, however, does not have the monotone property, so as the dimension increases a variable set is easily judged to be dependent even when it is completely independent, and the method is not usable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independence and use the apriori algorithm. The apriori algorithm is one of the algorithms that find association rules in market basket data; it exploits the anti-monotone property of the support defined for association rules. Independence does not strictly have the anti-monotone property on the AIC of the independence probability model, but the tendency towards anti-monotonicity is strong; therefore, variables selected under an assumed anti-monotone property on the AIC are robust. Our method judges whether a given variable is an exposure variable for the independent variables by comparing the AIC values computed earlier in the search. Our numerical experiments show that our method can select robust exposure variables efficiently and precisely.
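
    A minimal sketch of the level-wise, apriori-style search with AIC-based pruning that the abstract describes, under the assumption that the independence check behaves anti-monotonically: candidate sets are grown only from sets that already passed the check, so supersets of failed sets are pruned. The binary data and the exact form of the check are illustrative.

    from itertools import combinations
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 500, 6
    X = rng.integers(0, 2, size=(n, p))
    X[:, 1] = X[:, 0] ^ (rng.random(n) < 0.1)   # variables 0 and 1 are dependent

    def independent_by_aic(cols):
        """True if the independence model beats the saturated model on AIC."""
        counts = np.zeros((2,) * len(cols))
        for row in X[:, cols]:
            counts[tuple(row)] += 1
        joint = counts / n
        indep = np.ones_like(joint)                 # product of the marginals
        for i in range(len(cols)):
            axes = tuple(a for a in range(len(cols)) if a != i)
            shape = [1] * len(cols)
            shape[i] = 2
            indep = indep * (counts.sum(axis=axes) / n).reshape(shape)
        mask = counts > 0
        ll_sat = (counts[mask] * np.log(joint[mask])).sum()
        ll_ind = (counts[mask] * np.log(indep[mask])).sum()
        return -2 * ll_ind + 2 * len(cols) <= -2 * ll_sat + 2 * (counts.size - 1)

    level = [c for c in combinations(range(p), 2) if independent_by_aic(list(c))]
    kept = list(level)
    while level:
        # grow only the sets that passed: anti-monotone pruning
        level = [s + (j,) for s in level for j in range(s[-1] + 1, p)
                 if independent_by_aic(list(s + (j,)))]
        kept += level
    print(f"(0, 1) kept: {(0, 1) in kept}; independent sets kept: {len(kept)}")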

  20. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are issued at 30-minute intervals up to two hours ahead, a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
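
    A sketch of an ordered logistic nowcast of this kind, using simulated stand-ins for the visibility and humidity measurements and statsmodels' OrderedModel; the class thresholds and coefficients below are invented.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 1000
    X = pd.DataFrame({"visibility": rng.uniform(0, 10, n),
                      "humidity": rng.uniform(40, 100, n)})
    # latent propensity for worse conditions, with logistic noise
    latent = -0.8 * X["visibility"] + 0.05 * X["humidity"] + rng.logistic(size=n)
    y = pd.cut(latent, bins=[-np.inf, -4, -2, 0, np.inf],
               labels=["cat0", "cat1", "cat2", "cat3"], ordered=True)

    model = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
    print(model.params)                       # slopes and class thresholds
    print(model.predict(X.iloc[:3]))          # class probabilities for 3 cases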

  1. Simultaneous use of geological, geophysical, and LANDSAT digital data in uranium exploration. [Libya

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Missallati, A.; Prelat, A.E.; Lyon, R.J.P.

    1979-08-01

    The simultaneous use of geological, geophysical and Landsat data in uranium exploration in southern Libya is reported. The values of 43 geological, geophysical and digital data variables, including age and type of rock, geological contacts, aeroradiometric and aeromagnetic values and brightness ratios, were used as input into a geomathematical model. Stepwise discriminant analysis was used to select grid cells most favorable for detailed mineral exploration and to evaluate the significance of each variable in discriminating between the anomalous (radioactive) and nonanomalous (nonradioactive) areas. It is found that the geological contact relationships, Landsat Band 6 and Band 7/4 ratio values were most useful in the discrimination. The procedure was found to be statistically and geologically reliable, and applicable to similar regions using only the most important geological and Landsat data.

  2. Using Concentration Curves to Assess Organization-Specific Relationships between Surgeon Volumes and Outcomes.

    PubMed

    Kanter, Michael H; Huang, Yii-Chieh; Kally, Zina; Gordon, Margo A; Meltzer, Charles

    2018-06-01

    A well-documented association exists between higher surgeon volumes and better outcomes for many procedures, but surgeons may be reluctant to change practice patterns without objective, credible, and near real-time data on their performance. In addition, published thresholds for procedure volumes may be biased or perceived as arbitrary; typical reports compare surgeons grouped into discrete procedure volume categories, even though the volume-outcomes relationship is likely continuous. The concentration curves methodology, which has been used to analyze whether health outcomes vary with socioeconomic status, was adapted to explore the association between procedure volume and outcomes as a continuous relationship so that data for all surgeons within a health care organization could be included. Using widely available software and requiring minimal analytic expertise, this approach plots cumulative percentages of two variables of interest against each other and assesses the characteristics of the resulting curve. Organization-specific relationships between surgeon volumes and outcomes were examined for three example types of procedures: uncomplicated hysterectomies, infant circumcisions, and total thyroidectomies. The concentration index was used to assess whether outcomes were equally distributed unrelated to volumes. For all three procedures, the concentration curve methodology identified associations between surgeon procedure volumes and selected outcomes that were specific to the organization. The concentration indices confirmed the higher prevalence of examined outcomes among low-volume surgeons. The curves supported organizational discussions about surgical quality. Concentration curves require minimal resources to identify organization- and procedure-specific relationships between surgeon procedure volumes and outcomes and can support quality improvement. Copyright © 2018 The Joint Commission. Published by Elsevier Inc. All rights reserved.
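
    A sketch of the concentration-curve computation described above, with fabricated surgeon volumes and outcome counts: surgeons are ranked by volume, cumulative shares are accumulated, and a concentration index (one common convention is shown) summarizes the area between the curve and the diagonal.

    import numpy as np

    rng = np.random.default_rng(0)
    volume = rng.integers(1, 80, size=60)                 # procedures per surgeon
    rate = 0.10 * (1 + 30 / (volume + 10))                # worse when low-volume
    outcomes = rng.binomial(volume, np.clip(rate, 0, 1))  # adverse outcome counts

    order = np.argsort(volume)                            # low to high volume
    cum_proc = np.insert(np.cumsum(volume[order]), 0, 0) / volume.sum()
    cum_out = np.insert(np.cumsum(outcomes[order]), 0, 0) / outcomes.sum()

    # concentration index: twice the area between curve and diagonal;
    # negative values mean outcomes concentrate among low-volume surgeons
    area = float(np.sum((cum_out[1:] + cum_out[:-1]) / 2 * np.diff(cum_proc)))
    print("concentration index:", round(1 - 2 * area, 3))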

  3. Major System Source Evaluation and Selection Procedures.

    DTIC Science & Technology

    1987-04-02

    A-RIBI I" MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES / (U) BUSINESS MANAGEMENT RESEARCH ASSOCIATES INC ARLINGTON VA 02 APR 6? ORMC-5...BRMC-85-5142-1 0 I- MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES o I Business Management Research Associates, Inc. 1911 Jefferson Davis...FORCE SOURCE EVALUATION AND SELECTI ON PROCEDURES Prepared by Business Management Research Associates, Inc., 1911 Jefferson Davis Highway, Arlington

  4. 41 CFR 60-3.18 - Citations.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...

  5. 41 CFR 60-3.18 - Citations.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...

  6. 41 CFR 60-3.18 - Citations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...

  7. 41 CFR 60-3.18 - Citations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...

  8. 41 CFR 60-3.18 - Citations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...

  9. Harvests from bone marrow donors who weigh less than their recipients are associated with a significantly increased probability of a suboptimal harvest yield.

    PubMed

    Anthias, Chloe; Billen, Annelies; Arkwright, Rebecca; Szydlo, Richard M; Madrigal, J Alejandro; Shaw, Bronwen E

    2016-05-01

    Previous studies have demonstrated the importance of bone marrow (BM) harvest yield in determining transplant outcomes, but little is known regarding donor and procedure variables associated with achievement of an optimal yield. We hypothesized that donor demographics and variables relating to the procedure were likely to impact the yield (total nucleated cells [TNCs]/kg recipient weight) and quality (TNCs/mL) of the harvest. To test our hypothesis, BM harvests of 110 consecutive unrelated donors were evaluated. The relationship between donor or procedure characteristics and the BM harvest yield was examined. The relationship between donor and recipient weight significantly influenced the harvest yield; only 14% of BM harvests from donors who weighed less than their recipient achieved a TNC count of more than 4 × 10^8/kg, compared to 56% of harvests from donors heavier than their recipient (p = 0.001). Higher-volume harvests were significantly less likely to achieve an optimal yield than lower-volume harvests (32% vs. 78%; p = 0.007), and higher-volume harvests contained significantly fewer TNCs/mL, indicating peripheral blood contamination. BM harvest quality also varied significantly between collection centers, adding to recent concerns regarding maintenance of BM harvest expertise within the transplant community. Since the relationship between donor and recipient weight has a critical influence on yield, we recommend prioritizing this secondary donor characteristic when selecting from multiple well-matched donors. Given the declining number of requests for BM harvests, it is crucial that systems are developed to train operators and ensure expertise in this procedure is retained. © 2016 AABB.

  10. Variation of bar-press duration: where do new responses come from?

    PubMed

    Roberts, Seth; Gharib, Afshin

    2006-06-01

    Instrumental learning involves both variation and selection: variation of what the animal does, and selection by reward from among the variation. Four experiments with rats suggested a rule about how variation is controlled by recent events. Experiment 1 used the peak procedure. Measurements of bar-press durations showed a sharp increase in mean duration after the time that food was sometimes given. The increase was triggered by the omission of expected food. Our first explanation of the increase was that it was a frustration effect. Experiment 2 tested this explanation with a procedure in which the first response of a trial usually produced food, ending the trial. In Experiment 2, unlike Experiment 1, omission of expected food did not produce a large increase in bar-press duration, which cast doubt on the frustration explanation. Experiments 3 and 4 tested an alternative explanation: a decrease in expectation of reward increases variation. Both used two signals associated with different probabilities of reward. Bar presses were more variable in duration during the signal with the lower probability of reward, supporting this alternative. These experiments show how variation can be studied with ordinary equipment and responses.

  11. Study of the motion artefacts of skin-mounted inertial sensors under different attachment conditions.

    PubMed

    Forner-Cordero, A; Mateu-Arce, M; Forner-Cordero, I; Alcántara, E; Moreno, J C; Pons, J L

    2008-04-01

    A common problem shared by accelerometers, inertial sensors and any motion measurement method based on skin-mounted sensors is the movement of the soft tissues covering the bones. The aim of this work is to propose a method for validating the attachment of skin-mounted sensors. A second-order (mass-spring-damper) model was proposed to characterize the behaviour of the soft tissue between the bone and the sensor. Three sets of experiments were performed. In the first, different procedures to excite the system were evaluated in order to select an adequate excitation stimulus. In the second, the selected stimulus was applied under varying attachment conditions, while the third experiment was used to test the model. The heel drop was chosen as the excitation method because it showed lower variability and could discriminate between different attachment conditions. In agreement with the model, the natural frequency of the system tended to increase with decreasing accelerometer mass. An important result is the development of a standard procedure to test the bandwidth of skin-mounted inertial sensors, such as accelerometers mounted on the skin or markers heavier than a few grams.

  12. The University of Texas Houston Stroke Registry (UTHSR): implementation of enhanced data quality assurance procedures improves data quality

    PubMed Central

    2013-01-01

    Background Limited information has been published regarding standard quality assurance (QA) procedures for stroke registries. We share our experience regarding the establishment of enhanced QA procedures for the University of Texas Houston Stroke Registry (UTHSR) and evaluate whether these QA procedures have improved data quality in UTHSR. Methods All 5093 patient records that were abstracted and entered in UTHSR between January 1, 2008 and December 31, 2011 were considered in this study. We conducted reliability and validity studies. For reliability and validity of data captured by abstractors, a random subset of 30 records was used for re-abstraction of select key variables by two abstractors. These 30 records were also re-abstracted by a team of experts, including a vascular neurologist clinician, as the "gold standard". We assessed inter-rater reliability (IRR) between the two abstractors as well as the validity of each abstractor against the "gold standard". Depending on the scale of variables, IRR was assessed with kappa or intra-class correlations (ICC) using a 2-way, random effects ANOVA. For assessment of validity of data in UTHSR we re-abstracted another set of 85 patient records for which all discrepant entries were adjudicated by a vascular neurology fellow clinician and added to the set of our "gold standard". We assessed the level of agreement between the registry data and the "gold standard" as well as sensitivity and specificity. We used logistic regression to compare error rates for different years to assess whether a significant improvement in data quality was achieved during 2008-2011. Results The error rate dropped significantly, from 4.8% in 2008 to 2.2% in 2011 (P < 0.001). The two abstractors had an excellent IRR (kappa or ICC ≥ 0.75) on almost all key variables checked. Agreement between data in UTHSR and the "gold standard" was excellent for almost all categorical and continuous variables. Conclusions Establishment of a rigorous data quality assurance program for our UTHSR has helped to improve the validity of the data. We observed an excellent IRR between the two abstractors. We recommend training of chart abstractors and systematic assessment of IRR between abstractors and validity of the abstracted data in stroke registries. PMID:23767957
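
    For a categorical field, the inter-rater checks described above reduce to computations like the following sketch, in which the 30 re-abstracted values are simulated.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    rater1 = rng.integers(0, 3, size=30)                # e.g. stroke subtype codes
    rater2 = np.where(rng.random(30) < 0.85, rater1,    # raters agree ~85% of time
                      rng.integers(0, 3, size=30))

    print("percent agreement:", (rater1 == rater2).mean())
    print("Cohen's kappa:", cohen_kappa_score(rater1, rater2))  # >= 0.75 is excellent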

  13. Personality change of officer cadets in the Canadian Forces.

    PubMed

    Bradley, J Peter; Nicol, Adelheid A M

    2003-12-01

    The present research assessed the extent to which 46 officer cadets' personalities changed as a result of spending four years in a military academic institution. Four personality variables were examined: Surgency, Achievement, Conscientiousness, and Internal Control. Given the nature of the military environment and training, we hypothesized that individuals' scores on these scales would increase with time. Analysis indicated that scores on all four scales decreased. A confound was present: in the first administration, participants completed the measure as part of a selection procedure, whereas in the second they completed it voluntarily.

  14. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  15. Operations planning simulation: Model study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages, measured in terms of system efficiency, are: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  16. An analysis of life expectancy of airplane wings in normal cruising flight

    NASA Technical Reports Server (NTRS)

    Putnam, Abbott A

    1945-01-01

    In order to provide a basis for judging the relative importance of wing failure by fatigue and by single intense gusts, an analysis of wing life for normal cruising flight was made based on data on the frequency of atmospheric gusts. The independent variables considered in the analysis included stress-concentration factor, stress-load relation, wing loading, design and cruising speeds, design gust velocity, and airplane size. Several methods for estimating fatigue life from gust frequencies are discussed. The procedure selected for the analysis is believed to be simple and reasonably accurate, though slightly conservative.

  17. Technology assessment--who is getting stuck, anyway?

    PubMed

    Bayne, C G

    1997-10-01

    Some 13% to 62% of all injuries reported to hospital occupational health workers are traceable to phlebotomy procedures. However, the selection of a needleless system is complex. The informed manager seeks answers to the following questions: (1) Do needleless systems reduce the risk of seroconversion to bloodborne pathogens? (Answer yes.) (2) Does the use of a needleless system affect patients' risk of catheter sepsis? (Answer no.) and (3) What about chemical compatibility with the newer materials used in needleless systems? (New variables require more studies.) The author lists references, manufacturers and some of the chemicals to which some manufacturers have exposed their devices.

  18. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  19. Stability indicating methods for the analysis of cefprozil in the presence of its alkaline induced degradation product

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-04-01

    Three simple, specific, accurate and precise spectrophotometric methods were developed for the determination of cefprozil (CZ) in the presence of its alkaline induced degradation product (DCZ). The first method was the bivariate method, while the two other multivariate methods were partial least squares (PLS) and spectral residual augmented classical least squares (SRACLS). The multivariate methods were applied with and without a variable selection procedure (genetic algorithm, GA). These methods were tested by analyzing laboratory-prepared mixtures of the above drug with its alkaline induced degradation product, and they were applied to commercial pharmaceutical products.

  20. Boosting for detection of gene-environment interactions.

    PubMed

    Pashova, H; LeBlanc, M; Kooperberg, C

    2013-01-30

    In genetic association studies, it is typically thought that genetic variants and environmental variables jointly will explain more of the inheritance of a phenotype than either of these two components separately. Traditional methods to identify gene-environment interactions typically consider only one measured environmental variable at a time. However, in practice, multiple environmental factors may each be imprecise surrogates for the underlying physiological process that actually interacts with the genetic factors. In this paper, we develop a variant of L2 boosting that is specifically designed to identify combinations of environmental variables that jointly modify the effect of a gene on a phenotype. Because the effect modifiers might have a small signal compared with the main effects, working in a space that is orthogonal to the main predictors allows us to focus on the interaction space. In a simulation study that investigates some plausible underlying model assumptions, our method outperforms the least absolute shrinkage and selection operator (LASSO), Akaike Information Criterion, and Bayesian Information Criterion model selection procedures, achieving the lowest test error. In an example for the Women's Health Initiative-Population Architecture using Genomics and Epidemiology study, the dedicated boosting method was able to pick out two single-nucleotide polymorphisms for which effect modification appears present. The performance was evaluated on an independent test set, and the results are promising. Copyright © 2012 John Wiley & Sons, Ltd.
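
    A toy sketch of the core idea may help: residualize both the response and the candidate gene-by-environment interaction terms against the main effects, then run component-wise L2 boosting in that orthogonal space. The simulated data, shrinkage value, and fixed number of boosting steps are assumptions; the paper's algorithm is more elaborate.

```python
# Component-wise L2 boosting on interaction terms orthogonalized against main
# effects. Data generation and tuning values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 500
G = rng.integers(0, 3, n).astype(float)      # SNP genotype coded 0/1/2
E = rng.normal(size=(n, 3))                  # three environmental surrogates
y = 0.5 * G + 0.3 * E[:, 0] + 0.4 * G * E[:, 1] + rng.normal(size=n)

X_main = np.column_stack([np.ones(n), G, E])   # main-effect design (with intercept)
inter = G[:, None] * E                          # candidate G x E interaction terms

# Project out the main effects from the response and from each interaction term
r = y - X_main @ np.linalg.lstsq(X_main, y, rcond=None)[0]
Z = inter - X_main @ np.linalg.lstsq(X_main, inter, rcond=None)[0]
Z = Z / Z.std(axis=0)

beta, nu = np.zeros(Z.shape[1]), 0.1           # coefficients, shrinkage factor
for _ in range(200):                           # fixed number of boosting steps
    scores = Z.T @ r / n                       # fit to the current residual
    j = np.argmax(np.abs(scores))              # best-fitting component
    beta[j] += nu * scores[j]
    r = r - nu * scores[j] * Z[:, j]

print("interaction coefficients (G x E1..E3):", np.round(beta, 3))
```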

  1. Factors explaining children's responses to intravenous needle insertions.

    PubMed

    McCarthy, Ann Marie; Kleiber, Charmaine; Hanrahan, Kirsten; Zimmerman, M Bridget; Westhus, Nina; Allen, Susan

    2010-01-01

    Previous research shows that numerous child, parent, and procedural variables affect children's distress responses to procedures. Cognitive-behavioral interventions such as distraction are effective in reducing pain and distress for many children undergoing these procedures. The purpose of this report was to examine child, parent, and procedural variables that explain child distress during a scheduled intravenous insertion when parents are distraction coaches for their children. A total of 542 children, between 4 and 10 years of age, and their parents participated. Child age, gender, diagnosis, and ethnicity were measured by questions developed for this study. Standardized instruments were used to measure child experience with procedures, temperament, ability to attend, anxiety, coping style, and pain sensitivity. Questions were developed to measure parent variables, including ethnicity, gender, previous experiences, and expectations, and procedural variables, including use of topical anesthetics and difficulty of procedure. Standardized instruments were used to measure parenting style and parent anxiety, whereas a new instrument was developed to measure parent performance of distraction. Children's distress responses were measured with the Observation Scale of Behavioral Distress-Revised (behavioral), salivary cortisol (biological), Oucher Pain Scale (self-report), and parent report of child distress (parent report). Regression methods were used for data analyses. Variables explaining behavioral, child-report and parent-report measures include child age, typical coping response, and parent expectation of distress (p < .01). Level of parents' distraction coaching explained a significant portion of behavioral, biological, and parent-report distress measures (p < .05). Child impulsivity and special assistance at school also significantly explained child self-report of pain (p < .05). Additional variables explaining cortisol response were child's distress in the morning before clinic, diagnoses of attention deficit hyperactivity disorder or anxiety disorder, and timing of preparation for the clinic visit. The findings can be used to identify children at risk for high distress during procedures. This is the first study to find a relationship between child behavioral distress and level of parent distraction coaching.

  2. Environmental risk assessment of water quality in harbor areas: a new methodology applied to European ports.

    PubMed

    Gómez, Aina G; Ondiviela, Bárbara; Puente, Araceli; Juanes, José A

    2015-05-15

    This work presents a standard and unified procedure for assessment of environmental risks at the contaminant source level in port aquatic systems. Using this method, port managers and local authorities will be able to hierarchically classify environmental hazards and proceed with the most suitable management actions. This procedure combines rigorously selected parameters and indicators to estimate the environmental risk of each contaminant source based on its probability, consequences and vulnerability. The spatio-temporal variability of multiple stressors (agents) and receptors (endpoints) is taken into account to provide accurate estimations for application of precisely defined measures. The developed methodology is tested on a wide range of different scenarios via application in six European ports. The validation process confirms its usefulness, versatility and adaptability as a management tool for port water quality in Europe and worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. A scoping review of assessment tools for laparoscopic suturing.

    PubMed

    Bilgic, Elif; Endo, Satoshi; Lebedeva, Ekaterina; Takao, Madoka; McKendy, Katherine M; Watanabe, Yusuke; Feldman, Liane S; Vassiliou, Melina C

    2018-05-03

    A needs assessment identified a gap in teaching and assessment of laparoscopic suturing (LS) skills. The purpose of this review is to identify assessment tools that were used to assess LS skills, to evaluate the validity evidence available, and to provide guidance for selecting the right assessment tool for specific assessment conditions. Bibliographic databases were searched through April 2017. Full-text articles were included if they reported on assessment tools used in the operating room/simulation to (1) assess procedures that require LS or (2) specifically assess LS skills. Forty-two tools were identified, of which 26 were used for assessing LS skills specifically and 26 for procedures that require LS. Tools had the most evidence in internal structure and relationship to other variables, and least in consequences. Through identification and evaluation of assessment tools, the results of this review could be used as a guideline when implementing assessment tools into training programs.

  4. A Direct Screening Procedure for Gravitropism Mutants in Arabidopsis thaliana (L.) Heynh. 1

    PubMed Central

    Bullen, Bertha L.; Best, Thérèse R.; Gregg, Mary M.; Barsel, Sara-Ellen; Poff, Kenneth L.

    1990-01-01

    In order to isolate gravitropism mutants of Arabidopsis thaliana (L.) Heynh. var Estland for the genetic dissection of the gravitropism pathway, a direct screening procedure has been developed in which mutants are selected on the basis of their gravitropic response. Variability in hypocotyl curvature was dependent on the germination time of each seed stock, resulting in the incorrect identification of several lines as gravitropism mutants when a standard protocol for the potentiation of germination was used. When the protocol was adjusted to allow for differences in germination time, these lines were eliminated from the collection. Out of the 60,000 M2 seedlings screened, 0.3 to 0.4% exhibited altered gravitropism. In approximately 40% of these mutant lines, only gravitropism by the root or the hypocotyl was altered, while the response of the other organ was unaffected. These data support the hypothesis that root and hypocotyl gravitropism are genetically separable. PMID:11537704

  5. A direct screening procedure for gravitropism mutants in Arabidopsis thaliana (L.) Heynh

    NASA Technical Reports Server (NTRS)

    Bullen, B. L.; Best, T. R.; Gregg, M. M.; Poff, K. L.; Barsel, S-E (Principal Investigator)

    1990-01-01

    In order to isolate gravitropism mutants of Arabidopsis thaliana (L.) Heynh. var Estland for the genetic dissection of the gravitropism pathway, a direct screening procedure has been developed in which mutants are selected on the basis of their gravitropic response. Variability in hypocotyl curvature was dependent on the germination time of each seed stock, resulting in the incorrect identification of several lines as gravitropism mutants when a standard protocol for the potentiation of germination was used. When the protocol was adjusted to allow for differences in germination time, these lines were eliminated from the collection. Out of the 60,000 M2 seedlings screened, 0.3 to 0.4% exhibited altered gravitropism. In approximately 40% of these mutant lines, only gravitropism by the root or the hypocotyl was altered, while the response of the other organ was unaffected. These data support the hypothesis that root and hypocotyl gravitropism are genetically separable.

  6. Sleep-deprivation effect on human performance: a meta-analysis approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candice D. Griffith; Sankaran Mahadevan

    Human fatigue is hard to define since, much as with stress, there is no direct measure of it; fatigue must instead be inferred from measures that are affected by it. One such measurable output affected by fatigue is reaction time. In this study the relationship of reaction time to sleep deprivation is studied. These variables were selected because reaction time and hours of sleep deprivation are straightforward characteristics of fatigue with which to begin the investigation of fatigue effects on performance. Meta-analysis, a widely used procedure in medical and psychological studies, is applied to the fatigue literature collected from various fields in this study. Meta-analysis establishes a procedure for coding and analyzing information from various studies to compute an effect size. In this research the effect size reported is the difference between standardized means, found to be -0.6341, implying a strong relationship between sleep deprivation and performance degradation.
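
    The reported effect size is a standardized difference between group means; a minimal reference implementation (Cohen's d with a pooled standard deviation) is sketched below. The sample values are invented, and the paper's meta-analytic coding of multiple studies is more involved than this two-sample case.

```python
# Standardized mean difference (Cohen's d) with a pooled standard deviation.
# The reaction-time samples are hypothetical, for illustration only.
import numpy as np

def cohens_d(x, y):
    """Standardized difference between the means of two samples."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
rested = rng.normal(290, 35, 30)     # hypothetical reaction times (ms)
deprived = rng.normal(320, 40, 30)   # hypothetical reaction times (ms)
print(f"d = {cohens_d(rested, deprived):.3f}")   # negative: rested respond faster
```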

  7. An application of robust ridge regression model in the presence of outliers to real data problem

    NASA Astrophysics Data System (ADS)

    Shariff, N. S. Md.; Ferdaos, N. A.

    2017-09-01

    Multicollinearity and outliers often lead to inconsistent and unreliable parameter estimates in regression analysis. The well-known procedure that is robust to the multicollinearity problem is ridge regression. This method, however, is believed to be affected by the presence of outliers. A combination of GM-estimation and a ridge parameter that is robust to both problems is of interest in this study. As such, both techniques are employed to investigate the relationship between stock market prices and macroeconomic variables in Malaysia, a data set in which both multicollinearity and outliers are suspected. Four macroeconomic factors were selected for this study: Consumer Price Index (CPI), Gross Domestic Product (GDP), Base Lending Rate (BLR) and Money Supply (M1). The results demonstrate that the proposed procedure produces reliable results in the presence of multicollinearity and outliers in the real data.
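
    As a rough stand-in for the combination described, sklearn's HuberRegressor pairs an outlier-resistant loss with an L2 (ridge) penalty, giving a robust ridge in the same spirit. The GM-estimator used by the authors differs in detail, and the data below are simulated, not the Malaysian macroeconomic series.

```python
# Robust ridge stand-in: Huber loss with an L2 penalty, compared to plain ridge
# on data with induced multicollinearity and injected outliers (all simulated).
import numpy as np
from sklearn.linear_model import HuberRegressor, Ridge

rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 4))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # near-duplicate regressor
y = 2 * X[:, 0] - X[:, 2] + rng.normal(size=n)
y[:10] += 15                                     # inject gross outliers

ridge = Ridge(alpha=1.0).fit(X, y)
robust = HuberRegressor(alpha=1.0, epsilon=1.35).fit(X, y)
print("ridge coefficients:       ", np.round(ridge.coef_, 2))
print("robust ridge coefficients:", np.round(robust.coef_, 2))
```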

  8. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  9. Matched-pair analyses of resting and dynamic morphology between Monarc and TVT-O procedures by ultrasound.

    PubMed

    Yang, Jenn-Ming; Yang, Shwu-Huey; Huang, Wen-Chen; Tzeng, Chii-Ruey

    2013-07-01

    To determine morphologic differences between Monarc and TVT-O procedures in axial and coronal planes by three- and four-dimensional (3D and 4D) ultrasound. Retrospective chart audits and ultrasound analyses were conducted on 128 women who had undergone either Monarc or TVT-O procedures for urodynamic stress incontinence. Thirty matched pairs of the two successful procedures were randomly selected and compared. Matched variables were age, parity, body mass index, cesarean status, menopausal status, and primary surgeries. Six-month postoperative 3D and 4D ultrasound results obtained at rest, on straining, and during coughing in these 60 women were analyzed. Assessed ultrasound parameters included the axial tape urethral distance (aTUD), axial central urethral echolucent area (aUCEA), axial tape angle (aTA), and coronal tape angle (cTA), all of which were measured at three equidistant points along the tapes. Paired t-tests were used to compare differences in ultrasound parameters between women after the two procedures and a P value <0.004 was considered significant after Bonferroni correction. At rest, women subjected to Monarc procedures had a significantly wider aTA at one-fourth of the tape and a wider cTA at one-, two-, and three-fourths of the tape than did those subjected to TVT-O procedures. There were no significant differences in other resting ultrasound parameters between these two procedures. Additionally, after both procedures women had comparable straining and coughing ultrasound manifestations as well as respective dynamic changes. Despite flatter resting tape angulations in women following Monarc procedures, both Monarc and TVT-O tapes had equivalent dynamic patterns and changes assessed by 4D ultrasound. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. 48 CFR 6.102 - Use of competitive procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... procedure (see subpart 36.6 for procedures). (2) Competitive selection of basic and applied research and... nature identifying areas of research interest, including criteria for selecting proposals, and soliciting...

  11. Parameterizing sorption isotherms using a hybrid global-local fitting procedure.

    PubMed

    Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J

    2017-05-01

    Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of the state of the art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative-free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
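
    The hybrid strategy can be sketched in a few lines: a global optimizer explores the parameter bounds, and a local least-squares refinement polishes the result. Here scipy's differential_evolution stands in for the paper's particle swarm stage, and least_squares plays the role of the Levenberg-Marquardt refinement, fitted to a toy Freundlich isotherm q = Kf * C**n.

```python
# Two-stage hybrid global-local fit of a Freundlich isotherm to toy data.
# differential_evolution substitutes for the paper's particle swarm stage.
import numpy as np
from scipy.optimize import differential_evolution, least_squares

C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])       # concentrations
q_obs = 3.0 * C**0.4 + np.random.default_rng(3).normal(0, 0.2, C.size)

def residuals(p):
    Kf, n = p
    return Kf * C**n - q_obs          # unit weights; WSSR uses weighted residuals

def wssr(p):
    return float(np.sum(residuals(p)**2))

glob = differential_evolution(wssr, bounds=[(0.01, 100), (0.01, 1.5)], seed=0)
loc = least_squares(residuals, glob.x, method="lm")   # local refinement stage
print("Kf, n =", np.round(loc.x, 3), "| WSSR =", round(wssr(loc.x), 4))
```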

  12. Regional variability in fecal microbiota transplantation practices: a survey of the Southern Ontario Fecal Microbiota Transplantation Movement.

    PubMed

    Hota, Susy S; Surangiwala, Salman; Paterson, Aimee S; Coburn, Bryan; Poutanen, Susan M

    2018-04-18

    There is growing evidence that fecal microbiota transplantation (FMT) is an effective treatment for recurrent Clostridium difficile infection, but little guidance exists for implementation of FMT programs. The objective of this study is to describe the program characteristics and protocols of 9 planned or operating FMT programs in the Southern Ontario Fecal Microbiota Transplantation (SOFT) Movement, to help guide future FMT program implementation. A 59-item survey was administered electronically to clinical leads of the SOFT Movement on June 2, 2016. The survey evaluated 7 domains: FMT program characteristics, FMT recipients, donor screening/selection, transplant manufacturing, FMT administration, good manufacturing procedures/biosafety procedures and infection-control procedures. We used descriptive statistics to analyze quantitative data. All 9 programs responded to the survey: 6 were active, 1 had FMT standard operating procedures developed but did not have clinical experience, and 2 were in the process of forming FMT programs. All 6 active programs performed FMT in adult patients with C. difficile infection. About 1300 FMT procedures were performed between 2003 and 2016. Five of the 6 operating programs administered the preparation via enema. Programs were driven primarily by physicians. All programs used universal FMT donors and followed Health Canada's screening guidelines, with considerable variability in screening frequency (every 3-6 mo) and modality. Locations for transplant preparation and manufacturing protocols varied across programs. Stool mass for FMT ranged from 20 g to 150 g, and transplant volume ranged from 25 mL to 300 mL. The experience of this high-volume regional FMT network highlights current challenges in FMT program development, including a high reliance on physicians and the costly nature of donor screening. Standardization and optimization through development of regional centres of excellence for FMT donor recruitment and administration should be explored. Copyright 2018, Joule Inc. or its licensors.

  13. Renal biopsy practice: What is the gold standard?

    PubMed

    Brachemi, Soumeya; Bollée, Guillaume

    2014-11-06

    Renal biopsy (RB) is useful for diagnosis and therapy guidance of renal diseases but incurs a risk of bleeding complications of variable severity, from transitory haematuria or asymptomatic hematoma to life-threatening hemorrhage. Several risk factors for complications after RB have been identified, including high blood pressure, age, decreased renal function, obesity, anemia, low platelet count and hemostasis disorders. These should be carefully assessed and, whenever possible, corrected before the procedure. The incidence of serious complications has become low with the use of automated biopsy devices and ultrasound guidance, which is currently the "gold standard" procedure for percutaneous RB. An outpatient biopsy may be considered in a carefully selected population with no risk factor for bleeding. However, controversies persist on the duration of observation after biopsy, especially for native kidney biopsy. Transjugular RB and laparoscopic RB represent reliable alternatives to conventional percutaneous biopsy in patients at high risk of bleeding, although some factors limit their use. The aim of this review is to summarize the issues of complications after RB, assessment of hemorrhagic risk factors, optimal biopsy procedure and strategies aimed at minimizing the risk of bleeding.

  14. Building a Computer Program to Support Children, Parents, and Distraction during Healthcare Procedures

    PubMed Central

    McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W. Nick; Zimmerman, M. Bridget; Ersig, Anne L.

    2012-01-01

    This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children’s responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, the Children, Parents and Distraction (CPaD), is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure. PMID:22805121

  15. Improving the Spatial Prediction of Soil Organic Carbon Stocks in a Complex Tropical Mountain Landscape by Methodological Specifications in Machine Learning Approaches

    PubMed Central

    Schmidt, Johannes; Glaser, Bruno

    2016-01-01

    Tropical forests are significant carbon sinks and their soils’ carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms: random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms—including the model tuning and predictor selection—were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m-2, displaying a huge variability, with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models’ predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward compared to backward selection procedures. Selecting predictors on the basis of their individual performance was outperformed by the two procedures that accounted for predictor interaction. PMID:27128736
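
    The evaluation protocol translates directly into standard tooling: tune a boosted regression tree on a grid, then score the tuned model with five repetitions of tenfold cross-validation. The predictors, grid values, and data below are synthetic placeholders, not the study's terrain and satellite covariates.

```python
# Tune a boosted regression tree, then score with 5x repeated 10-fold CV.
# Predictors and response are synthetic stand-ins for the study's covariates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 10))                             # terrain-like predictors
y = 2 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.5, 300)  # SOC-stock surrogate

grid = GridSearchCV(GradientBoostingRegressor(random_state=0),
                    param_grid={"n_estimators": [100, 300],
                                "learning_rate": [0.05, 0.1],
                                "max_depth": [2, 3]},
                    cv=5, scoring="neg_root_mean_squared_error").fit(X, y)

cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
rmse = -cross_val_score(grid.best_estimator_, X, y, cv=cv,
                        scoring="neg_root_mean_squared_error")
print("best params:", grid.best_params_)
print(f"repeated-CV RMSE: {rmse.mean():.3f} +/- {rmse.std():.3f}")
```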

  16. Selection of Drought Tolerant Maize Hybrids Using Path Coefficient Analysis and Selection Index.

    PubMed

    Dao, Abdalla; Sanou, Jacob; V S Traore, Edgar; Gracen, Vernon; Danquah, Eric Y

    2017-01-01

    In drought-prone environments, direct selection for yield is not adequate because of the variable environment and genotype x environment interaction. Therefore, the use of secondary traits in addition to yield has been suggested. The relative usefulness of secondary traits as indirect selection criteria for maize grain yield is determined by the magnitudes of their genetic variance, heritability and genetic correlation with the grain yield. Forty-eight testcross hybrids derived from lines with different genetic backgrounds and geographical origins, plus 7 checks, were evaluated under both well-watered and water-stressed conditions over two years for grain yield and secondary traits, to determine the most appropriate secondary traits and to select drought-tolerant hybrids. The study found that broad-sense heritability of grain yield and Ears Per Plant (EPP) increased under drought stress. Ear aspect (EASP) and ear height (EHT) had larger correlation coefficients and direct effects on grain yield, but in opposite directions, negative and positive respectively. Traits such as EPP, Tassel Size (TS) and Plant Recovery (PR) contributed to increased yield via EASP through a large negative indirect effect. Under drought stress, EHT had a positive and high direct effect and a negative indirect effect via plant height on grain yield, indicating that the ratio between ear and plant heights (R-EPH) was associated with grain yield. Path coefficient analysis showed that EPP, TS, PR, EASP and R-EPH were important secondary traits in the present experiment. These traits were used in a selection index to classify hybrids according to their performance under drought, as sketched below. The selection procedure also included a Relative Decrease in Yield (RDY) index. Some secondary traits reported elsewhere as significant selection criteria under drought stress were not confirmed in the present study. This is because the relationship between grain yield and secondary traits can be affected by various factors, including germplasm, environment and the applied statistical analysis. Therefore, different traits and selection procedures should be applied in the selection process of drought-tolerant genotypes for diverse genetic materials and growing conditions.
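
    A weighted selection index over the retained secondary traits, combined with a relative-decrease-in-yield (RDY) term, can be outlined as below. The trait weights, trait values, and the way the two indices are combined are all illustrative assumptions, not the study's actual index.

```python
# Toy selection index over secondary traits plus a relative decrease in yield
# (RDY) term; weights and values are invented for illustration.
import numpy as np

traits = ["EPP", "TS", "PR", "EASP", "R-EPH"]
weights = np.array([0.30, 0.10, 0.20, -0.25, 0.15])  # sign = desired direction
X = np.random.default_rng(9).normal(size=(5, len(traits)))  # standardized traits
yield_ww = np.array([6.1, 5.8, 6.4, 5.5, 6.0])   # t/ha, well-watered
yield_ws = np.array([3.9, 4.2, 3.1, 3.8, 4.4])   # t/ha, water-stressed

index = X @ weights                               # higher = better secondary traits
rdy = (yield_ww - yield_ws) / yield_ww            # lower = more drought tolerant
ranking = np.argsort(-(index - rdy))              # combine the two criteria
print("hybrid ranking (best first):", ranking)
```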

  17. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  18. 48 CFR 36.301 - Use of two-phase design-build selection procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Use of two-phase design-build selection procedures. 36.301 Section 36.301 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Two-Phase Design-Build Selection Procedures 36.301...

  19. 9 CFR 592.450 - Procedures for selecting appeal samples.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...

  20. 9 CFR 592.450 - Procedures for selecting appeal samples.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...

  1. 9 CFR 592.450 - Procedures for selecting appeal samples.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...

  2. 9 CFR 592.450 - Procedures for selecting appeal samples.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...

  3. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    PubMed

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various games' scores, reported on the WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted.

  4. Statistical summary of selected physical, chemical, and toxicity characteristics and estimates of annual constituent loads in urban stormwater, Maricopa County, Arizona

    USGS Publications Warehouse

    Fossum, Kenneth D.; O'Day, Christie M.; Wilson, Barbara J.; Monical, Jim E.

    2001-01-01

    Stormwater and streamflow in Maricopa County were monitored to (1) describe the physical, chemical, and toxicity characteristics of stormwater from areas having different land uses, (2) describe the physical, chemical, and toxicity characteristics of streamflow from areas that receive urban stormwater, and (3) estimate constituent loads in stormwater. Urban stormwater and streamflow had similar ranges in most constituent concentrations. The mean concentration of dissolved solids in urban stormwater was lower than in streamflow from the Salt River and Indian Bend Wash. Urban stormwater, however, had a greater chemical oxygen demand and higher concentrations of most nutrients. Mean seasonal loads and mean annual loads of 11 constituents and volumes of runoff were estimated for municipalities in the metropolitan Phoenix area, Arizona, by adjusting regional regression equations of loads. This adjustment procedure uses the original regional regression equation and additional explanatory variables that were not included in the original equation. The adjusted equations had standard errors that ranged from 161 to 196 percent. The large standard errors of the prediction result from the large variability of the constituent concentration data used in the regression analysis. Adjustment procedures produced unsatisfactory results for nine of the regressions: suspended solids, dissolved solids, total phosphorus, dissolved phosphorus, total recoverable cadmium, total recoverable copper, total recoverable lead, total recoverable zinc, and storm runoff. These equations had no consistent direction of bias and no other additional explanatory variables correlated with the observed loads. A stepwise-multiple regression or a three-variable regression (total storm rainfall, drainage area, and impervious area) and local data were used to develop local regression equations for these nine constituents. These equations had standard errors from 15 to 183 percent.

  5. Within-patient temporal variance in MELD score and impact on survival prediction after TIPS creation.

    PubMed

    Gaba, Ron C; Shah, Kruti D; Couture, Patrick M; Parvinian, Ahmad; Minocha, Jeet; Knuttinen, M Grace; Bui, James T

    2013-01-01

    To assess within-patient temporal variability in Model for End Stage Liver Disease (MELD) scores and impact on outcome prognostication after transjugular intrahepatic portosystemic shunt (TIPS) creation. In this single institution retrospective study, MELD score was calculated in 68 patients (M:F = 42:26, mean age 55 years) at 4 pre-procedure time points (1, 2-6, 7-14, and 15-35 days) before TIPS creation. Medical record review was used to identify 30- and 90-day clinical outcomes. Within-patient variability in pre-procedure MELD scores was assessed using repeated measures analysis of variance, and the ability of MELD scores at different time points to predict post-TIPS mortality was evaluated by comparing area under receiver operating characteristic (AUROC) curves. TIPS were successfully created for ascites (n = 30), variceal hemorrhage (n = 29), hepatic hydrothorax (n = 8), and portal vein thrombosis (n = 1). Pre-TIPS MELD scores showed significant (P = 0.032) within-subject variance that approached ± 18.5%. Higher MELD scores demonstrated greater variability in sequential scores as compared to lower MELD scores. Overall 30- and 90-day patient mortality was 22% (15/67) and 38% (24/64). AUROC curves showed that the most recent MELD scores, calculated on the day of TIPS, had superior predictive capacity for 30- (0.876, P = 0.037) and 90-day (0.805, P = 0.020) mortality compared to MELD scores calculated 2-6 or 7-14 days prior. In conclusion, MELD scores show within-patient variability over time, and scores calculated on the day of TIPS most accurately predict risk and should be used for patient selection and counseling.
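
    For reference, the classic (pre-MELD-Na) UNOS formula behind the scores discussed above is easy to state in code. The clamping conventions (floors of 1.0, creatinine capped at 4.0, dialysis handling) follow common UNOS practice but vary somewhat across centers, so treat them as assumptions.

```python
# Classic MELD: 3.78*ln(bilirubin) + 11.2*ln(INR) + 9.57*ln(creatinine) + 6.43,
# with conventional clamping of the laboratory values.
import math

def meld(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    creat = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(creat) + 6.43)
    return round(score)

print(meld(bilirubin_mg_dl=2.4, inr=1.6, creatinine_mg_dl=1.1))  # -> 16
```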

  6. Descriptor selection for banana accessions based on univariate and multivariate analysis.

    PubMed

    Brandão, L P; Souza, C P F; Pereira, V M; Silva, S O; Santos-Serejo, J A; Ledo, C A S; Amorim, E P

    2013-05-14

    Our objective was to establish a minimum number of morphological descriptors for the characterization of banana germplasm and evaluate the efficiency of removal of redundant characters, based on univariate and multivariate statistical analyses. Phenotypic characterization was made of 77 accessions from Bahia, Brazil, using 92 descriptors. The selection of the descriptors was carried out by principal components analysis (quantitative) and by entropy (multi-category). Efficiency of elimination was analyzed by a comparative study between the clusters formed, taking into consideration all 92 descriptors and smaller groups. The selected descriptors were analyzed with the Ward-MLM procedure and a combined matrix formed by the Gower algorithm. We were able to reduce the number of descriptors used for characterizing the banana germplasm (42%). The correlation between the matrices considering the 92 descriptors and the selected ones was 0.82, showing that the reduction in the number of descriptors did not influence estimation of genetic variability between the banana accessions. We conclude that removing these descriptors caused no loss of information, considering the groups formed from pre-established criteria, including subgroup/subspecies.

  7. [Biases in the study of prognostic factors].

    PubMed

    Delgado-Rodríguez, M

    1999-01-01

    The main objective is to detail the main biases in the study of prognostic factors. Confounding bias is illustrated with social class, a prognostic factor still under discussion. Within selection bias, several cases are discussed: response bias, especially frequent when the patients of a clinical trial are used; the shortcomings in the formation of an inception cohort; the fallacy of Neyman (bias due to the duration of disease) when the study begins with a cross-sectional study; the survivor treatment selection bias, arising from the different treatment opportunities of those living longer; the bias due to the inclusion of heterogeneous diagnostic groups; and the selection bias due to differential information losses and the use of multivariate statistical procedures. Within the biases during follow-up, an empirical rule to assess the impact of the number of losses is given. In information bias, the Will Rogers phenomenon and the usefulness of clinical databases are discussed. Lastly, a recommendation is given against the use of cutoff points yielded by bivariate analyses to select the variables to be included in multivariate analysis.

  8. Integrating Genetic, Neuropsychological and Neuroimaging Data to Model Early-Onset Obsessive Compulsive Disorder Severity

    PubMed Central

    Mas, Sergi; Gassó, Patricia; Morer, Astrid; Calvo, Anna; Bargalló, Nuria; Lafuente, Amalia; Lázaro, Luisa

    2016-01-01

    We propose an integrative approach that combines structural magnetic resonance imaging data (MRI), diffusion tensor imaging data (DTI), neuropsychological data, and genetic data to predict early-onset obsessive compulsive disorder (OCD) severity. From a cohort of 87 patients, 56 with complete information were used in the present analysis. First, we performed a multivariate genetic association analysis of OCD severity with 266 genetic polymorphisms. This association analysis was used to select and prioritize the SNPs that would be included in the model. Second, we split the sample into a training set (N = 38) and a validation set (N = 18). Third, entropy-based measures of information gain were used for feature selection with the training subset. Fourth, the selected features were fed into two supervised methods of class prediction based on machine learning, using the leave-one-out procedure with the training set. Finally, the resulting model was validated with the validation set. Nine variables were used for the creation of the OCD severity predictor, including six genetic polymorphisms and three variables from the neuropsychological data. The developed model classified child and adolescent patients with OCD by disease severity with an accuracy of 0.90 in the testing set and 0.70 in the validation sample. Above its clinical applicability, the combination of particular neuropsychological, neuroimaging, and genetic characteristics could enhance our understanding of the neurobiological basis of the disorder. PMID:27093171
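
    The entropy-based step corresponds to ranking candidate predictors by their information gain (mutual information) with the class label and keeping the top-scoring ones. A minimal sketch with invented genotype-like features follows; the study's pipeline combines this with SNP prioritization and supervised classifiers.

```python
# Rank discrete features by mutual information with a binary severity label.
# Features and labels are invented placeholders, not the study's data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(5)
X = rng.integers(0, 3, size=(56, 12)).astype(float)   # e.g. SNP genotypes 0/1/2
severity = (X[:, 2] + X[:, 7] + rng.normal(0, 1, 56) > 2.5).astype(int)

mi = mutual_info_classif(X, severity, discrete_features=True, random_state=0)
top = np.argsort(mi)[::-1][:3]
print("top features by information gain:", top, np.round(mi[top], 3))
```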

  9. Variability of indication criteria in knee and hip replacement: an observational study

    PubMed Central

    2010-01-01

    Background Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery in patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. Methods The pre-surgical condition of 1603 patients included was compared by their personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and Mcmaster Universities) scale. The type of patient undergoing primary arthroplasty was similar in the 15 different hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for 5-95 percentile range. The variability in subjective and objective signs was evaluated using median, range and WCV5-95. The appropriateness of the procedures performed was calculated using a specific threshold proposed by Quintana et al for assessing pain and functional capacity. Results The variability expressed as WCV5-95 was very low, between 0.05 and 0.11 for all three dimensions on WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients). However, a moderate-high variability was detected in subjective-objective signs. Among all the surgeries performed, approximately a quarter of them could be considered to be inappropriate. Conclusions A greater inter-hospital variability was observed for objective than for subjective signs for both procedures, suggesting that the differences in clinical criteria followed by surgeons when indicating arthroplasty are the main factors responsible for the variation in surgery rates. PMID:20977745

  10. Variability of indication criteria in knee and hip replacement: an observational study.

    PubMed

    Cobos, Raquel; Latorre, Amaia; Aizpuru, Felipe; Guenaga, Jose I; Sarasqueta, Cristina; Escobar, Antonio; García, Lidia; Herrera-Espiñeira, Carmen

    2010-10-26

    Total knee (TKR) and hip (THR) replacement (arthroplasty) are effective surgical procedures that relieve pain, improve patients' quality of life and increase functional capacity. Studies on variations in medical practice usually find the indications for performing these procedures to be highly variable, because surgeons appear to follow different criteria when recommending surgery in patients with different severity levels. We therefore proposed a study to evaluate inter-hospital variability in arthroplasty indication. The pre-surgical condition of 1603 patients included was compared by their personal characteristics, clinical situation and self-perceived health status. Patients were asked to complete two health-related quality of life questionnaires: the generic SF-12 (Short Form) and the specific WOMAC (Western Ontario and Mcmaster Universities) scale. The type of patient undergoing primary arthroplasty was similar in the 15 different hospitals evaluated. The variability in baseline WOMAC score between hospitals in THR and TKR indication was described by range, mean and standard deviation (SD), mean and standard deviation weighted by the number of procedures at each hospital, high/low ratio or extremal quotient (EQ5-95), variation coefficient (CV5-95) and weighted variation coefficient (WCV5-95) for 5-95 percentile range. The variability in subjective and objective signs was evaluated using median, range and WCV5-95. The appropriateness of the procedures performed was calculated using a specific threshold proposed by Quintana et al for assessing pain and functional capacity. The variability expressed as WCV5-95 was very low, between 0.05 and 0.11 for all three dimensions on WOMAC scale for both types of procedure in all participating hospitals. The variability in the physical and mental SF-12 components was very low for both types of procedure (0.08 and 0.07 for hip and 0.03 and 0.07 for knee surgery patients). However, a moderate-high variability was detected in subjective-objective signs. Among all the surgeries performed, approximately a quarter of them could be considered to be inappropriate. A greater inter-hospital variability was observed for objective than for subjective signs for both procedures, suggesting that the differences in clinical criteria followed by surgeons when indicating arthroplasty are the main factors responsible for the variation in surgery rates.

  11. Independence screening for high dimensional nonlinear additive ODE models with applications to dynamic gene regulatory networks.

    PubMed

    Xue, Hongqi; Wu, Shuang; Wu, Yichao; Ramirez Idarraga, Juan C; Wu, Hulin

    2018-05-02

    Mechanism-driven low-dimensional ordinary differential equation (ODE) models are often used to model viral dynamics at cellular levels and epidemics of infectious diseases. However, low-dimensional mechanism-based ODE models are limited for modeling infectious diseases at molecular levels such as transcriptomic or proteomic levels, which is critical to understand pathogenesis of diseases. Although linear ODE models have been proposed for gene regulatory networks (GRNs), nonlinear regulations are common in GRNs. The reconstruction of large-scale nonlinear networks from time-course gene expression data remains an unresolved issue. Here, we use high-dimensional nonlinear additive ODEs to model GRNs and propose a 4-step procedure to efficiently perform variable selection for nonlinear ODEs. To tackle the challenge of high dimensionality, we couple the 2-stage smoothing-based estimation method for ODEs and a nonlinear independence screening method to perform variable selection for the nonlinear ODE models. We have shown that our method possesses the sure screening property and it can handle problems with non-polynomial dimensionality. Numerical performance of the proposed method is illustrated with simulated data and a real data example for identifying the dynamic GRN of Saccharomyces cerevisiae. Copyright © 2018 John Wiley & Sons, Ltd.
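
    A much-simplified sketch of the two-stage backbone: smooth the target trajectory with a spline, differentiate the smooth, then regress the estimated derivative sparsely on nonlinear features of candidate regulators. The paper's full four-step procedure adds nonlinear independence screening and additive-model machinery omitted here.

```python
# Two-stage sketch: spline smoothing + derivative estimation, then sparse
# regression of dx/dt on nonlinear features of candidate regulators (toy data).
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import Lasso

rng = np.random.default_rng(11)
t = np.linspace(0, 10, 40)
x1 = np.sin(t) + 0.05 * rng.normal(size=t.size)         # regulator 1 trajectory
x2 = np.cos(t) + 0.05 * rng.normal(size=t.size)         # regulator 2 trajectory
target = np.cumsum(np.tanh(np.sin(t))) * (t[1] - t[0])  # true dynamics: dx/dt = tanh(x1)
target += 0.02 * rng.normal(size=t.size)

# Stage 1: smooth the observed target gene and estimate its derivative
spl = UnivariateSpline(t, target, s=0.05)
dxdt = spl.derivative()(t)

# Stage 2: sparse regression of the derivative on nonlinear candidate terms
features = np.column_stack([x1, x2, np.tanh(x1), np.tanh(x2), x1 * x2])
coef = Lasso(alpha=0.01).fit(features, dxdt).coef_
print("nonzero terms -> selected regulators:", np.round(coef, 3))
```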

  12. Review of the tactical evaluation tools for youth players, assessing the tactics in team sports: football.

    PubMed

    González-Víllora, Sixto; Serra-Olivares, Jaime; Pastor-Vicedo, Juan Carlos; da Costa, Israel Teoldo

    2015-01-01

    For sports assessment to be comprehensive, it must address all variables of sports development: psychological, social-emotional, physical and physiological, technical, and tactical. Tactical assessment was a neglected variable until the 1980s and 1990s. In the last two decades (1995-2015), tactical assessment has evolved considerably, given its importance in game performance. The aim of this paper is to compile and analyze different tactical measuring tools in team sports, particularly in soccer, through a bibliographical review. Six tools were selected on five criteria: (1) the instrument assesses tactics; (2) the study takes a developmental approach related to the tactical principles; (3) the method is valid and reliable; (4) publications exist that mention the tool in their methods; and (5) the tool is applicable in different sports contexts. All six tools are structured around seven headings: introduction, objective(s), tactical principles, materials, procedures, instructions/rules of the game, and published studies. In conclusion, tactically oriented teaching-learning processes have useful tactical assessment instruments available in the literature. The choice among them depends on contextual information such as the age and level of expertise of the players.

  13. Isolation and characterization of anti ROR1 single chain fragment variable antibodies using phage display technique.

    PubMed

    Aghebati-Maleki, Leili; Younesi, Vahid; Jadidi-Niaragh, Farhad; Baradaran, Behzad; Majidi, Jafar; Yousefi, Mehdi

    2017-01-01

    Receptor tyrosine kinase-like orphan receptor (ROR1) belongs to one of the families of receptor tyrosine kinases (RTKs). RTKs are involved in various physiologic cellular functions including proliferation, migration, survival, signaling and differentiation. Several RTKs are deregulated in various cancers, implying the targeting potential of these molecules in cancer therapy. ROR1 has recently been shown to be expressed in various types of cancer cells but not in normal adult cells. Hence a molecular inhibitor of the extracellular domain of ROR1 that blocks ROR1-cell surface interaction is of great therapeutic importance. In an attempt to develop molecular inhibitors of ROR1, we screened single chain variable fragment (scFv) phage display libraries, Tomlinson I + J, against one specific synthetic oligopeptide from the extracellular domain of ROR1, and selected scFvs were characterized using various immunological techniques. Several ROR1-specific scFvs were selected following five rounds of the panning procedure. The scFvs showed specific binding to ROR1 in these assays. Our results demonstrate successful isolation and characterization of specific ROR1 scFvs that may have great therapeutic potential in cancer immunotherapy.

  14. Emotional Experience Improves With Age: Evidence Based on Over 10 Years of Experience Sampling

    PubMed Central

    Carstensen, Laura L.; Turan, Bulent; Scheibe, Susanne; Ram, Nilam; Ersner-Hershfield, Hal; Samanez-Larkin, Gregory R.; Brooks, Kathryn P.; Nesselroade, John R.

    2012-01-01

    Recent evidence suggests that emotional well-being improves from early adulthood to old age. This study used experience-sampling to examine the developmental course of emotional experience in a representative sample of adults spanning early to very late adulthood. Participants (N = 184, Wave 1; N = 191, Wave 2; N = 178, Wave 3) reported their emotional states at five randomly selected times each day for a one week period. Using a measurement burst design, the one-week sampling procedure was repeated five and then ten years later. Cross-sectional and growth curve analyses indicate that aging is associated with more positive overall emotional well-being, with greater emotional stability and with more complexity (as evidenced by greater co-occurrence of positive and negative emotions). These findings remained robust after accounting for other variables that may be related to emotional experience (personality, verbal fluency, physical health, and demographic variables). Finally, emotional experience predicted mortality; controlling for age, sex, and ethnicity, individuals who experienced relatively more positive than negative emotions in everyday life were more likely to have survived over a 13 year period. Findings are discussed in the theoretical context of socioemotional selectivity theory. PMID:20973600

  15. Towards the development of laboratory methods for studying drinking games: Initial findings, methodological considerations, and future directions

    PubMed Central

    Silvestri, Mark M.; Lewis, Jennifer M.; Borsari, Brian; Correia, Christopher J.

    2014-01-01

    Background Drinking games are prevalent among college students and are associated with increased alcohol use and negative alcohol-related consequences. There has been substantial growth in research on drinking games. However, the majority of published studies rely on retrospective self-reports of behavior and very few studies have made use of laboratory procedures to systematically observe drinking game behavior. Objectives The current paper draws on the authors’ experiences designing and implementing methods for the study of drinking games in the laboratory. Results The paper addresses the following key design features: (a) drinking game selection; (b) beverage selection; (c) standardizing game play; (d) selection of dependent and independent variables; and (e) creating a realistic drinking game environment. Conclusions The goal of this methodological review paper is to encourage other researchers to pursue laboratory research on drinking game behavior. Use of laboratory-based methodologies will facilitate a better understanding of the dynamics of risky drinking and inform prevention and intervention efforts. PMID:25192209

  16. Stratification and sample selection for multicrop experiments. [Arkansas, Kentucky, Michigan, Missouri, Mississippi, Ohio, Wisconsin, Illinois, Indiana, Minnesota, Iowa, Louisiana, Nebraska, South Dakota, and North Dakota

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Hixson, M. M.; Davis, B. J.; Bauer, M. E.

    1978-01-01

    The author has identified the following significant results. A stratification was performed and sample segments were selected for an initial investigation of multicrop problems in order to support development and evaluation of procedures for using LACIE and other technologies for the classification of corn and soybeans, to identify factors likely to affect classification performance, and to evaluate problems encountered and techniques which are applicable to the crop estimation problem in foreign countries. Two types of samples, low density and high density, supporting these requirements were selected as research data set for an initial evaluation of technical issues. Looking at the geographic location of the strata, the system appears to be logical and the various segments seem to represent different conditions. This result is supportive not only of the variables and the methodology employed in the stratification, but also of the validity of the data sets employed.

  17. Pareto genealogies arising from a Poisson branching evolution model with selection.

    PubMed

    Huillet, Thierry E

    2014-02-01

    We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.

  18. Identification of solid state fermentation degree with FT-NIR spectroscopy: Comparison of wavelength variable selection methods of CARS and SCARS.

    PubMed

    Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai

    2015-01-01

    The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA models were then calibrated on the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that CARS and SCARS selected 58 and 47 wavelength variables, respectively, from the 1557 original wavelength variables. Compared with full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Moreover, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in validation. The overall results demonstrate that a PLS-DA model constructed on wavelength variables chosen by a suitable selection method identifies solid state fermentation degree more accurately.
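
    As a rough illustration of this workflow (not the authors' CARS/SCARS implementation, which adds Monte Carlo sampling, an exponentially shrinking variable-retention schedule, and, for SCARS, coefficient-stability weighting), a minimal Python sketch on synthetic spectra might rank wavelengths by the magnitude of PLS regression coefficients and refit a PLS-DA model on the reduced set:

    ```python
    # Sketch: coefficient-based wavelength selection followed by PLS-DA.
    # All data are synthetic; the 1557-variable width mirrors the abstract.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n, p = 120, 1557                                  # samples x wavelength variables
    X = rng.normal(size=(n, p))
    y = (X[:, 100] + X[:, 900] > 0).astype(float)     # two fermentation-degree classes

    pls = PLSRegression(n_components=5).fit(X, y)
    w = np.abs(pls.coef_).ravel()                     # absolute regression coefficients
    keep = np.argsort(w)[-50:]                        # retain the 50 strongest wavelengths

    pls_sel = PLSRegression(n_components=5).fit(X[:, keep], y)
    pred = (pls_sel.predict(X[:, keep]).ravel() > 0.5).astype(float)  # PLS-DA by thresholding
    print("training identification rate:", (pred == y).mean())
    ```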

  20. Predicting emergency coronary artery bypass graft following PCI: application of a computational model to refer patients to hospitals with and without onsite surgical backup

    PubMed Central

    Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S

    2015-01-01

    Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
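
    A minimal sketch of the cohort-level scoring and AUROC evaluation described above, on synthetic data (the registry variables, the AUROC-targeted training objective, and the exact SVM configuration are not reproduced here):

    ```python
    # Sketch: score a validation cohort with an SVM and evaluate cohort-level AUROC,
    # then flag the predicted top decile as high risk. Synthetic data only.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 69))                  # 69 risk variables per procedure
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 2.5).astype(int)  # rare event

    svm = SVC(kernel="rbf", class_weight="balanced").fit(X[:4000], y[:4000])
    scores = svm.decision_function(X[4000:])         # continuous risk scores
    print("validation AUROC:", roc_auc_score(y[4000:], scores))

    cut = np.quantile(scores, 0.9)                   # predicted top decile of risk
    high_risk = scores >= cut
    ```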

  1. Screening procedure for airborne pollutants emitted from a high-tech industrial complex in Taiwan.

    PubMed

    Wang, John H C; Tsai, Ching-Tsan; Chiang, Chow-Feng

    2015-11-01

    Despite the modernization of computational techniques, atmospheric dispersion modeling remains a complicated task, as it involves the use of large amounts of interrelated data with wide variability. The continuously growing list of regulated air pollutants also increases the difficulty of this task. To address these challenges, this study aimed to develop a screening procedure for a long-term exposure scenario by generating a site-specific lookup table of hourly averaged dispersion factors (χ/Q), which could be evaluated by downwind distance, direction, and effective plume height only. To allow for such simplification, the average plume rise was weighted with the frequency distribution of meteorological data so that the prediction of χ/Q could be decoupled from the meteorological data. To illustrate this procedure, 20 receptors around a high-tech complex in Taiwan were selected. Five consecutive years of hourly meteorological data were acquired to generate a lookup table of χ/Q, as well as two regression formulas of plume rise as functions of downwind distance, buoyancy flux, and stack height. To calculate the concentrations for the selected receptors, a six-step Excel algorithm was programmed with four years of emission records, and the 10 most critical toxics were screened out. A validation check using the Industrial Source Complex (ISC3) model with the same meteorological and emission data showed an acceptable overestimate of 6.7% in the average concentration of 10 nearby receptors. The procedure proposed in this study allows practical and focused emission management for a large industrial complex and can therefore be integrated into an air quality decision-making system.
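
    The screening arithmetic reduces to multiplying an emission rate by a tabulated χ/Q interpolated by downwind distance and effective plume height. A minimal sketch with placeholder table values (not the study's site-specific data; the direction sector is omitted for brevity):

    ```python
    # Sketch: long-term concentration = Q (g/s) * chi/Q (s/m^3) from a lookup table,
    # with bilinear interpolation over (downwind distance, effective plume height).
    import numpy as np

    distances = np.array([100.0, 300.0, 1000.0, 3000.0])   # m downwind
    heights = np.array([10.0, 30.0, 60.0])                 # m effective plume height
    chi_over_q = np.array([                                 # s/m^3, placeholder values
        [2e-5, 8e-6, 2e-6],
        [9e-6, 5e-6, 2e-6],
        [3e-6, 2e-6, 1e-6],
        [8e-7, 6e-7, 4e-7],
    ])

    def concentration(q_gps, distance_m, height_m):
        """Interpolate chi/Q and scale by the emission rate Q (g/s)."""
        i = np.searchsorted(distances, distance_m).clip(1, len(distances) - 1)
        j = np.searchsorted(heights, height_m).clip(1, len(heights) - 1)
        fx = (distance_m - distances[i-1]) / (distances[i] - distances[i-1])
        fy = (height_m - heights[j-1]) / (heights[j] - heights[j-1])
        c = (chi_over_q[i-1, j-1] * (1-fx) * (1-fy) + chi_over_q[i, j-1] * fx * (1-fy)
             + chi_over_q[i-1, j] * (1-fx) * fy + chi_over_q[i, j] * fx * fy)
        return q_gps * c                                    # g/m^3

    print(concentration(5.0, 500.0, 25.0))
    ```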

  2. Generalized structural equations improve sexual-selection analyses

    PubMed Central

    Santini, Giacomo; Marchetti, Giovanni Maria; Focardi, Stefano

    2017-01-01

    Sexual selection is an intense evolutionary force, which operates through competition for access to breeding resources. There are many cases where male copulatory success is highly asymmetric, and few males are able to sire most females. Two main hypotheses were proposed to explain this asymmetry: “female choice” and “male dominance”. The literature reports contrasting results. This variability may reflect actual differences among studied populations, but it may also be generated by methodological differences and statistical shortcomings in data analysis. A review of the statistical methods used so far in lek studies shows a prevalence of Linear Models (LM) and Generalized Linear Models (GLM), which may be affected by problems in inferring cause-effect relationships, multi-collinearity among explanatory variables, and erroneous handling of non-normal and non-continuous distributions of the response variable. In lek breeding, selective pressure is maximal, because large numbers of males and females congregate in small arenas. We used a dataset on lekking fallow deer (Dama dama) to contrast the methods and procedures employed so far, and we propose a novel approach based on Generalized Structural Equations Models (GSEMs). GSEMs combine the power and flexibility of both SEM and GLM in a unified modeling framework. We showed that LMs fail to identify several important predictors of male copulatory success and yield very imprecise parameter estimates. Minor variations in data transformation yield wide changes in results, and the method appears unreliable. GLMs improved the analysis, but GSEMs provided better results, because the use of latent variables decreases the impact of measurement errors. Using GSEMs, we were able to test contrasting hypotheses and calculate both direct and indirect effects, and we reached a high precision of the estimates, which implies a high predictive ability. In synthesis, we recommend the use of GSEMs in studies on lekking behaviour, and we provide guidelines to implement these models. PMID:28809923

  3. Effects of Sarin on the Operant Behavior of Guinea Pigs

    DTIC Science & Technology

    2005-07-19

    …after behavioral sessions had ended. The first collection time was after the final saline… modified autoshaping procedure (concurrent variable-time…

  4. 23 CFR 636.202 - When are two-phase design-build selection procedures appropriate?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    23 CFR 636.202, Highways; Federal Highway Administration, Department of Transportation; Engineering and Traffic Operations; Design-Build Contracting; Selection Procedures, Award Criteria. § 636.202: When are two-phase design-build selection procedures appropriate?…

  5. Trends and variability in the hydrological regime of the Mackenzie River Basin

    NASA Astrophysics Data System (ADS)

    Abdul Aziz, Omar I.; Burn, Donald H.

    2006-03-01

    Trends and variability in the hydrological regime were analyzed for the Mackenzie River Basin in northern Canada. The procedure utilized the Mann-Kendall non-parametric test to detect trends, the Trend Free Pre-Whitening (TFPW) approach for correcting time-series data for autocorrelation and a bootstrap resampling method to account for the cross-correlation structure of the data. A total of 19 hydrological and six meteorological variables were selected for the study. Analysis was conducted on hydrological data from a network of 54 hydrometric stations and meteorological data from a network of 10 stations. The results indicated that several hydrological variables exhibit a greater number of significant trends than are expected to occur by chance. Noteworthy were strong increasing trends over the winter month flows of December to April as well as in the annual minimum flow and weak decreasing trends in the early summer and late fall flows as well as in the annual mean flow. An earlier onset of the spring freshet is noted over the basin. The results are expected to assist water resources managers and policy makers in making better planning decisions in the Mackenzie River Basin.
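
    For readers unfamiliar with the two core steps, a minimal sketch of the Mann-Kendall test combined with trend-free pre-whitening on a synthetic flow series (the study's bootstrap for the cross-correlation structure is omitted):

    ```python
    # Sketch: Mann-Kendall S statistic (no-ties variance) plus a simple TFPW pass
    # that removes lag-1 autocorrelation before testing. Synthetic data only.
    import numpy as np
    from scipy import stats

    def mann_kendall_z(x):
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0           # variance assuming no ties
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return z, 2 * (1 - stats.norm.cdf(abs(z)))         # two-sided p-value

    def tfpw(x):
        t = np.arange(len(x))
        slope = stats.theilslopes(x, t)[0]                 # Sen's slope
        detrended = x - slope * t
        r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
        whitened = detrended[1:] - r1 * detrended[:-1]     # remove lag-1 AR component
        return whitened + slope * t[1:]                    # restore the trend

    rng = np.random.default_rng(2)
    flow = 0.05 * np.arange(50) + rng.normal(size=50)      # winter-month flow series
    print(mann_kendall_z(tfpw(flow)))
    ```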

  6. A respiratory alert model for the Shenandoah Valley, Virginia, USA

    NASA Astrophysics Data System (ADS)

    Hondula, David M.; Davis, Robert E.; Knight, David B.; Sitka, Luke J.; Enfield, Kyle; Gawtry, Stephen B.; Stenger, Phillip J.; Deaton, Michael L.; Normile, Caroline P.; Lee, Temple R.

    2013-01-01

    Respiratory morbidity (particularly COPD and asthma) can be influenced by short-term weather fluctuations that affect air quality and lung function. We developed a model to evaluate meteorological conditions associated with respiratory hospital admissions in the Shenandoah Valley of Virginia, USA. We generated ensembles of classification trees based on six years of respiratory-related hospital admissions (64,620 cases) and a suite of 83 potential environmental predictor variables. As our goal was to identify short-term weather linkages to high admission periods, the dependent variable was formulated as a binary classification of five-day moving average respiratory admission departures from the seasonal mean value. Accounting for seasonality removed the long-term apparent inverse relationship between temperature and admissions. We generated eight total models specific to the northern and southern portions of the valley for each season. All eight models demonstrate predictive skill (mean odds ratio = 3.635) when evaluated using a randomization procedure. The predictor variables selected by the ensembling algorithm vary across models, and both meteorological and air quality variables are included. In general, the models indicate complex linkages between respiratory health and environmental conditions that may be difficult to identify using more traditional approaches.
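
    A minimal sketch of how such a dependent variable can be constructed, using synthetic daily counts and an illustrative 75th-percentile cutoff (the study's exact departure threshold is an assumption here):

    ```python
    # Sketch: five-day moving-average admissions as departures from a
    # day-of-year seasonal mean, binarised into high/normal periods.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(3)
    idx = pd.date_range("2010-01-01", periods=2 * 365, freq="D")
    admissions = pd.Series(rng.poisson(30, len(idx)), index=idx)

    ma5 = admissions.rolling(5).mean()                        # five-day moving average
    seasonal = ma5.groupby(ma5.index.dayofyear).transform("mean")
    departure = ma5 - seasonal                                # departure from seasonal mean
    high_period = (departure > departure.quantile(0.75)).astype(int)  # binary target
    ```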

  7. Design, Characterization, and Optimization of Controlled Drug Delivery System Containing Antibiotic Drug/s

    PubMed Central

    Shelate, Pragna; Dave, Divyang

    2016-01-01

    The objective of this work was the design, characterization, and optimization of a controlled drug delivery system containing antibiotic drug/s. An osmotic drug delivery system was chosen as the controlled drug delivery system. The porous osmotic pump tablets were designed using Plackett-Burman and Box-Behnken factorial designs to find the best formulation. For screening of three categories of polymers, six independent variables were chosen for the Plackett-Burman design: osmotic agents sodium chloride and microcrystalline cellulose, pore-forming agents sodium lauryl sulphate and sucrose, and coating agents ethyl cellulose and cellulose acetate. Optimization of the osmotic tablets was done by a Box-Behnken design with three independent variables: osmotic agent sodium chloride, pore-forming agent sodium lauryl sulphate, and coating agent cellulose acetate. The results of the Plackett-Burman and Box-Behnken designs and ANOVA studies revealed that the osmotic agent and pore former had a significant effect on the drug release up to 12 hr. The observed responses were found to be very close to the predicted values for the most satisfactory formulation, which demonstrates the feasibility of the optimization procedure in the successful development of porous osmotic pump tablets containing antibiotic drug/s using sodium chloride, sodium lauryl sulphate, and cellulose acetate as key excipients. PMID:27610247
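
    For reference, a three-factor Box-Behnken design of the kind described can be generated in a few lines of coded -1/0/+1 levels; the mapping of factors to sodium chloride, sodium lauryl sulphate, and cellulose acetate follows the abstract, while the number of centre points is an assumption:

    ```python
    # Sketch: Box-Behnken design for k = 3 factors (coded levels),
    # i.e. all +/-1 combinations of each factor pair with the third at 0,
    # plus replicated centre runs.
    from itertools import combinations, product

    def box_behnken(k=3, center_points=3):
        runs = []
        for i, j in combinations(range(k), 2):        # vary factors two at a time
            for a, b in product((-1, 1), repeat=2):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
        runs += [[0] * k for _ in range(center_points)]   # centre runs
        return runs

    for run in box_behnken():
        print(run)                                    # 12 edge runs + 3 centre runs
    ```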

  8. Geographical Gradients in Argentinean Terrestrial Mammal Species Richness and Their Environmental Correlates

    PubMed Central

    Márquez, Ana L.; Real, Raimundo; Kin, Marta S.; Guerrero, José Carlos; Galván, Betina; Barbosa, A. Márcia; Olivero, Jesús; Palomo, L. Javier; Vargas, J. Mario; Justo, Enrique

    2012-01-01

    We analysed the main geographical trends of terrestrial mammal species richness (SR) in Argentina, assessing how broad-scale environmental variation (defined by climatic and topographic variables) and the spatial form of the country (defined by spatial filters based on spatial eigenvector mapping (SEVM)) influence the kinds and the numbers of mammal species along these geographical trends. We also evaluated if there are pure geographical trends not accounted for by the environmental or spatial factors. The environmental variables and spatial filters that simultaneously correlated with the geographical variables and SR were considered potential causes of the geographic trends. We performed partial correlations between SR and the geographical variables, maintaining the selected explanatory variables statistically constant, to determine if SR was fully explained by them or if a significant residual geographic pattern remained. All groups and subgroups presented a latitudinal gradient not attributable to the spatial form of the country. Most of these trends were not explained by climate. We used a variation partitioning procedure to quantify the pure geographic trend (PGT) that remained unaccounted for. The PGT was larger for latitudinal than for longitudinal gradients. This suggests that historical or purely geographical causes may also be relevant drivers of these geographical gradients in mammal diversity. PMID:23028254
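
    A minimal sketch of the partial-correlation step on synthetic data (variable names are placeholders; the study's SEVM spatial filters and variation partitioning are not reproduced):

    ```python
    # Sketch: partial correlation of richness and latitude, holding selected
    # environmental predictors statistically constant via regression residuals.
    import numpy as np

    def partial_corr(y, x, controls):
        """Correlation of y and x after regressing both on the control matrix."""
        Z = np.column_stack([np.ones(len(y)), controls])
        res_y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        res_x = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        return np.corrcoef(res_y, res_x)[0, 1]

    rng = np.random.default_rng(4)
    climate = rng.normal(size=(200, 3))                   # climatic/topographic variables
    latitude = rng.normal(size=200)
    richness = -0.8 * latitude + climate @ [0.5, 0.2, 0.1] + rng.normal(size=200)
    print(partial_corr(richness, latitude, climate))      # residual latitudinal gradient
    ```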

  9. Personality Constellations of Adolescents with Histories of Traumatic Parental Separations

    PubMed Central

    Malone, Johanna C.; Westen, Drew; Levendosky, Alytia A.

    2014-01-01

    Consistent with attachment theory and a developmental psychopathology framework, a growing body of research suggests that traumatic parental separations may lead to unique pathways of personality adaptation and maladaptation. The present study both examined personality characteristics and identified personality subtypes of adolescents with histories of traumatic separations. Randomly selected psychologists and psychiatrists provided data on 236 adolescents with histories of traumatic separations using a personality pathology instrument designed for use by clinically experienced observers, the Shedler-Westen Assessment Procedure (SWAP-II-A). Using a Q factor analysis, five distinct personality subtypes were identified: internalizing/avoidant, psychopathic, resilient, impulsive dysregulated, and immature dysregulated. Initial support for the validity of the subtypes was established based on Axis I and Axis II pathology, adaptive functioning, developmental history, and family history variables. The personality subtypes demonstrated substantial incremental validity in predicting adaptive functioning, above and beyond demographic variables and histories of other traumatic experiences. PMID:24647212

  10. Female Sex Is a Risk Factor for Failure of Hip Arthroscopy Performed for Acetabular Retroversion

    PubMed Central

    Poehling-Monaghan, Kirsten L.; Krych, Aaron J.; Levy, Bruce A.; Trousdale, Robert T.; Sierra, Rafael J.

    2017-01-01

    Background: The success of hip surgery in treating acetabular retroversion depends on the severity of the structural deformity and on selecting the correct patient for open or arthroscopic procedures. Purpose: To compare a group of patients with retroverted hips treated successfully with hip arthroscopy with a group of patients with retroverted hips that failed arthroscopic surgery, with special emphasis on (1) patient characteristics, (2) perioperative radiographic parameters, (3) intraoperative findings and concomitant procedures, and (4) patient sex. Study Design: Case-control study; Level of evidence, 3. Methods: We retrospectively reviewed the charts of 47 adult patients (47 hips) with acetabular retroversion who had undergone hip arthroscopy. Retroversion was based on the presence of an ischial spine sign in addition to either a crossover or posterior wall sign on a well-positioned anteroposterior pelvic radiograph. A total of 24 hips (50%) (16 females, 8 males; mean patient age, 31 years) had failed arthroscopy, defined as modified Harris Hip Score (mHHS) <80 or need for subsequent procedure. Twenty-three hips (8 females, 15 males; mean patient age, 29 years) were considered successful, defined as having no subsequent procedures and an mHHS >80 at the time of most recent follow-up. Perioperative variables, radiographic characteristics, and intraoperative findings were compared between the groups, in addition to a subgroup analysis based on sex. Results: The mean follow-up for successful hips was 30 months (SD, 11 months), with a mean mHHS of 95. In the failure group, 6 patients required subsequent procedures (4 anteverting periacetabular osteotomies and 2 total hip arthroplasties). The mean overall time to failure was 21 months, and the mean time to a second procedure was 24 months (total hip arthroplasty, 29.5 months; periacetabular osteotomy, 21.2 months); 18 hips failed on the basis of a low mHHS (mean, 65; range, 27-79) at last follow-up. Factors significantly different between the success and failure groups included patient sex, with males being more likely than females to have a successful outcome (P < .02), as well as undergoing femoral osteoplasty (P < .02). Intraoperative variables that were associated with worse outcome included isolated labral debridement (P < .002). In a subgroup analysis, males were more likely than their female counterparts to have a successful outcome with both isolated cam and combined cam-pincer resection (P < .05). Level of crossover correction on postoperative radiographs had no correlation with outcome. Conclusion: Acetabular retroversion remains a challenging pathoanatomy to treat arthroscopically. If hip arthroscopy is to be considered in select cases, we recommend labral preservation when possible. Male patients with correction of cam deformities did well, while females with significant retroversion appeared to be at greater risk for failure of arthroscopic treatment. PMID:29164164

  11. [Variations among Spanish regions in the use of three cardiovascular technologies].

    PubMed

    Fitch-Warner, Kathryn; García de Yébenes, María J; Lázaro y de Mercado, Pablo; Belaza-Santurde, Javier

    2006-12-01

    There is evidence that some geographic variations in the use of medical technologies are not explained by differences in disease burden. The objectives of this study were to quantify variability in the use of percutaneous coronary intervention (PCI), implantable cardioverter-defibrillators (ICDs), and cardiac resynchronization therapy (CRT) in Spanish autonomous regions and to try to explain the variability found for the first two technologies. Linear regression models were developed in which the number of procedures performed per million population (pmp) in 2003 in each autonomous region was the dependent variable. Independent variables used included indices of technology provision, regional wealth, and disease burden. For PCI, the mean utilization rate for the whole of Spain was 1038 procedures pmp, with a high-low ratio of 1.95. Differences in gross domestic product explained 21% of the variability, but there was no relationship between the number of procedures performed and disease burden. For ICDs, the mean number of procedures performed in the whole of Spain was 46 pmp, with a high-low ratio of 3.04. As for PCI, differences in regional wealth explained 40% of the variability, with disease burden making no contribution. For CRT, the mean number of procedures performed in Spain in 2003 was 15 pmp, with a high-low ratio of 15.7. The considerable regional variation that exists in the use of these three medical technologies is principally explained by differences in regional wealth and not in disease burden.

  12. Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?

    PubMed Central

    Delhey, Kaspar; Peters, Anne

    2008-01-01

    Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. Increased variability of known sexually-selected/quality-indicating colours in the studied species, provides support to the predictions borne from sexual selection theory but the lack of increased overall variability in males or dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766

  13. ENSO detection and use to inform the operation of large scale water systems

    NASA Astrophysics Data System (ADS)

    Pham, Vuong; Giuliani, Matteo; Castelletti, Andrea

    2016-04-01

    El Nino Southern Oscillation (ENSO) is a large-scale, coupled ocean-atmosphere phenomenon occurring in the tropical Pacific Ocean, and is considered one of the most significant factors causing hydro-climatic anomalies throughout the world. Water systems operations could benefit from a better understanding of this global phenomenon, which has the potential for enhancing the accuracy and lead time of long-range streamflow predictions. In turn, these are key to designing interannual water transfers in large-scale water systems to counter the increasingly frequent extremes induced by a changing climate. Although the ENSO teleconnection is well defined in some locations such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we contribute a general framework relying on input variable selection techniques for detecting the ENSO teleconnection and using this information to improve water reservoir operations. The core of our procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant determinants of streamflow variability for deriving predictive models based on the selected inputs, as well as to find the most valuable information for conditioning operating decisions. Our framework is applied to the multipurpose operations of the Hoa Binh reservoir in the Red River basin (Vietnam), taking into account hydropower production, water supply for irrigation, and flood mitigation during the monsoon season. Numerical results show that our framework is able to quantify the relationship between ENSO fluctuations and Red River basin hydrology. Moreover, we demonstrate that this ENSO teleconnection represents valuable information for improving the operations of the Hoa Binh reservoir.
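
    As a hedged illustration of input variable selection in this spirit, a generic forward-selection loop over candidate climate inputs (the actual IIS algorithm ranks inputs with tree-based models and iterates on residuals, which is not reproduced here):

    ```python
    # Sketch: greedy forward selection of streamflow predictors by
    # cross-validated R^2, stopping when the gain becomes negligible.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 8))                 # candidate inputs (e.g. SST indices)
    flow = 1.5 * X[:, 2] - 0.7 * X[:, 5] + rng.normal(size=300)

    selected, remaining = [], list(range(X.shape[1]))
    best_score = -np.inf
    while remaining:
        scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]],
                                     flow, cv=5, scoring="r2").mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score + 1e-3:   # stop when no meaningful gain
            break
        best_score = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    print("selected inputs:", selected)
    ```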

  14. Detailed Analysis of Peri-Procedural Strokes in Patients Undergoing Intracranial Stenting in SAMMPRIS

    PubMed Central

    Fiorella, David; Derdeyn, Colin P; Lynn, Michael J; Barnwell, Stanley L; Hoh, Brian L.; Levy, Elad I.; Harrigan, Mark R.; Klucznik, Richard P.; McDougall, Cameron G.; Pride, G. Lee; Zaidat, Osama O.; Lutsep, Helmi L.; Waters, Michael F.; Hourihane, J. Maurice; Alexandrov, Andrei V.; Chiu, David; Clark, Joni M.; Johnson, Mark D.; Torbey, Michel T.; Rumboldt, Zoran; Cloft, Harry J.; Turan, Tanya N.; Lane, Bethany F.; Janis, L. Scott; Chimowitz, Marc I.

    2012-01-01

    Background and Purpose Enrollment in the SAMMPRIS trial was halted due to the high risk of stroke or death within 30 days of enrollment in the percutaneous transluminal angioplasty and stenting (PTAS) arm relative to the medical arm. This analysis focuses on the patient and procedural factors that may have been associated with peri-procedural cerebrovascular events in the trial. Methods Bivariate and multivariate analyses were performed to evaluate whether patient and procedural variables were associated with cerebral ischemic or hemorrhagic events occurring within 30 days of enrollment (termed peri-procedural) in the PTAS arm. Results Of 224 patients randomized to PTAS, 213 underwent angioplasty alone (n=5) or with stenting (n=208). Of these, 13 had hemorrhagic strokes (7 parenchymal, 6 subarachnoid), 19 had ischemic stroke, and 2 had cerebral infarcts with temporary signs (CITS) within the peri-procedural period. Ischemic events were categorized as perforator occlusions (13), embolic (4), mixed perforator and embolic (2), and delayed stent occlusion (2). Multivariate analyses showed that higher percent stenosis, lower modified Rankin score, and clopidogrel load associated with an activated clotting time above the target range were associated (p ≤ 0.05) with hemorrhagic stroke. Non-smoking, basilar artery stenosis, diabetes, and older age were associated (p ≤ 0.05) with ischemic events. Conclusions Peri-procedural strokes in SAMMPRIS had multiple causes with the most common being perforator occlusion. Although risk factors for peri-procedural strokes could be identified, excluding patients with these features from undergoing PTAS to lower the procedural risk would limit PTAS to a small subset of patients. Moreover, given the small number of events, the present data should be used for hypothesis generation rather than to guide patient selection in clinical practice. PMID:22984008

  15. The Mediating and Moderating Effects of Workplace Social Capital on the Associations between Adverse Work Characteristics and Psychological Distress among Japanese Workers

    PubMed Central

    OSHIO, Takashi; INOUE, Akiomi; TSUTSUMI, Akizumi

    2014-01-01

    Our current study investigated how workplace social capital (WSC) mediates and moderates the associations between adverse work characteristics and psychological distress among Japanese workers. We collected cross-sectional data (N=9,350) from a baseline survey of an occupational Japanese cohort study. We focused on individual WSC and considered job demands/control, effort/reward, and two types (i.e., procedural and interactional) of organizational justice as work-characteristic variables. We defined psychological distress as a score of ≥5 on the Kessler Psychological Distress Scale (K6 scale). Multivariate logistic regression analyses predicted a binary variable of psychological distress by individual WSC and adverse work characteristics, adjusting for individual-level covariates. Individual WSC mediated the associations between adverse work characteristics and psychological distress in almost all model specifications. Additionally, individual WSC moderated the associations of psychological distress with high job demands, high effort, and low interactional justice when we used a high WSC cutoff point. In contrast, individual WSC did not moderate such interactions with low job control, reward, or procedural justice. We concluded that individual WSC mediated the associations between adverse work characteristics and psychological distress among Japanese workers while selectively moderating their associations at high levels of WSC. PMID:24705803

  16. Factors associated with no dental treatment in preschoolers with toothache: a cross-sectional study in outpatient public emergency services.

    PubMed

    Machado, Geovanna C M; Daher, Anelise; Costa, Luciane R

    2014-08-08

    Many parents rely on emergency services to deal with their children's dental problems, mostly pain and infection associated with dental caries. This cross-sectional study analyzed the factors associated with not performing an oral procedure in preschoolers with toothache attending public dental emergency services. Data were obtained from the clinical files of preschoolers treated at all nine dental emergency centers in Goiania, Brazil, in 2011. Data were children's age and sex, involved teeth, oral procedures, radiography requests, medications prescribed, and referrals. A total of 531 files of children under 6 years old with toothache, out of 1,108 examined, were selected. The children's mean age was 4.1 (SD 1.0) years (range 1-5 years) and 51.6% were girls. No oral procedures were performed in 49.2% of cases; in the other 50.8%, most of the oral procedures reported were endodontic interventions and temporary restorations. Primary molars were involved in 48.4% of cases. With the exception of "sex", the independent variables tested in the regression analysis were significantly associated with non-performance of oral procedures: age (OR 0.7; 95% CI 0.5-0.8), radiography request (OR 3.8; 95% CI 1.7-8.2), medication prescribed (OR 7.5; 95% CI 4.9-11.5), and patient referred to another service (OR 5.7; 95% CI 3.0-10.9). Many children with toothache received no oral procedure for pain relief.

  17. Stabilometric parameters are affected by anthropometry and foot placement.

    PubMed

    Chiari, Lorenzo; Rocchi, Laura; Cappello, Angelo

    2002-01-01

    To recognize and quantify the influence of biomechanical factors, namely anthropometry and foot placement, on the more common measures of stabilometric performance, including new-generation stochastic parameters. Biomechanical factors are known to influence postural stability, but their impact on stabilometric parameters has not yet been extensively explored. Fifty normal-bodied young adults were selected in order to cover a sufficiently wide range of anthropometric properties. They were allowed to choose their preferred side-by-side foot position, and their quiet stance was recorded with eyes open and closed by a force platform. Principal component analysis was used for feature selection among several biomechanical factors. A collection of 55 stabilometric parameters from the literature was estimated from the center-of-pressure time series. Linear relations between stabilometric parameters and selected biomechanical factors were investigated by robust regression techniques. The feature selection process returned height, weight, maximum foot width, base-of-support area, and foot opening angle as the relevant biomechanical variables. Only eleven of the 55 stabilometric parameters were completely immune from a linear dependence on these variables. The remaining parameters showed a moderate to high dependence that was strengthened upon eye closure. For these parameters, a normalization procedure was proposed to remove what can well be considered, in clinical investigations, a spurious source of between-subject variability. Care should be taken when quantifying postural sway through stabilometric parameters. It is suggested as good practice to include some anthropometric measurements in the experimental protocol, and to standardize or trace foot position. Although the role of anthropometry and foot placement has been investigated in specific studies, no studies in the literature systematically explore the relationship between such biomechanical factors and stabilometric parameters. This knowledge may contribute to better defining the experimental protocol and improving the functional evaluation of postural sway for clinical purposes, e.g., by removing through normalization the spurious effects of body properties and foot position on postural performance.
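
    A minimal sketch of the two analysis steps, PCA over candidate biomechanical factors followed by a robust regression of a sway parameter on the retained components (synthetic data; statsmodels' Huber RLM stands in for the robust technique used):

    ```python
    # Sketch: PCA-based feature reduction of biomechanical factors, then a
    # robust linear fit of a stabilometric parameter on the principal components.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(6)
    anthropometry = rng.normal(size=(50, 5))        # height, weight, foot width, BoS, angle
    sway_path = anthropometry @ [0.4, 0.3, 0.0, 0.2, 0.1] + rng.normal(scale=0.5, size=50)

    pcs = PCA(n_components=2).fit_transform(anthropometry)   # reduced biomechanical factors
    rlm = sm.RLM(sway_path, sm.add_constant(pcs), M=sm.robust.norms.HuberT()).fit()
    print(rlm.params)                                # robust slopes vs biomechanical factors
    ```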

  18. Resolving the Conflict Between Associative Overdominance and Background Selection

    PubMed Central

    Zhao, Lei; Charlesworth, Brian

    2016-01-01

    In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952

  19. The influence of drought on flow‐ecology relationships in Ozark Highland streams

    USGS Publications Warehouse

    Lynch, Dustin T.; Leasure, D. R.; Magoulick, Daniel D.

    2018-01-01

    Drought and summer drying can have strong effects on abiotic and biotic components of stream ecosystems. Environmental flow‐ecology relationships may be affected by drought and drying, adding further uncertainty to the already complex interaction of flow with other environmental variables, including geomorphology and water quality.Environment–ecology relationships in stream communities in Ozark Highland streams, USA, were examined over two years with contrasting environmental conditions, a drought year (2012) and a flood year (2013). We analysed fish, crayfish and benthic macroinvertebrate assemblages using two different approaches: (1) a multiple regression analysis incorporating predictor variables related to habitat, water quality, geomorphology and hydrology and (2) a canonical ordination procedure using only hydrologic variables in which forward selection was used to select predictors that were most related to our response variables.Reach‐scale habitat quality and geomorphology were found to be the most important influences on community structure, but hydrology was also important, particularly during the flood year. We also found substantial between‐year variation in environment–ecology relationships. Some ecological responses differed significantly between drought and flood years, while others remained consistent. We found that magnitude was the most important flow component overall, but that there was a shift in relative importance from low flow metrics during the drought year to average flow metrics during the flood year, and the specific metrics of importance varied markedly between assemblages and years.Findings suggest that understanding temporal variation in flow‐ecology relationships may be crucial for resource planning. While some relationships show temporal variation, others are consistent between years. Additionally, different kinds of hydrologic variables can differ greatly in terms of which assemblages they affect and how they affect them. Managers can address this complexity by focusing on relationships that are temporally stable and flow metrics that are consistently important across groups, such as flood frequency and flow variability.

  20. 41 CFR 60-3.6 - Use of selection procedures which have not been validated.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    41 CFR 60-3.6, Public Contracts and Property Management; General Principles. § 60-3.6: Use of selection procedures which have not been validated. A. Use of alternate…

  1. 5 CFR 720.206 - Selection guidelines.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    5 CFR 720.206, Administrative Personnel; § 720.206 Selection guidelines. This subpart sets forth requirements for a recruitment program, not a selection program… procedures and criteria must be consistent with the Uniform Guidelines on Employee Selection Procedures (43…

  2. Selective Heart, Brain and Body Perfusion in Open Aortic Arch Replacement.

    PubMed

    Maier, Sven; Kari, Fabian; Rylski, Bartosz; Siepe, Matthias; Benk, Christoph; Beyersdorf, Friedhelm

    2016-09-01

    Open aortic arch replacement is a complex and challenging procedure, especially in post-dissection aneurysms and in redo procedures after previous surgery of the ascending aorta or aortic root. We report our experience with the simultaneous selective perfusion of heart, brain, and remaining body to ensure optimal perfusion and to minimize perfusion-related risks during these procedures. We used a specially configured heart-lung machine with a centrifugal pump as the arterial pump and an additional roller pump for the selective cerebral perfusion. Initial arterial cannulation is achieved via the femoral artery or right axillary artery. After lower-body circulatory arrest and selective antegrade cerebral perfusion for the distal arch anastomosis, we started selective lower-body perfusion simultaneously with the selective antegrade cerebral perfusion and heart perfusion. Eighteen patients were successfully treated with this perfusion strategy from October 2012 to November 2015. No complications related to the heart-lung machine and the cannulation occurred during the procedures. Mean cardiopulmonary bypass time was 239 ± 33 minutes; the simultaneous selective perfusion of brain, heart, and remaining body lasted 55 ± 23 minutes. One patient suffered a temporary neurological deficit that resolved completely during the intensive care unit stay. No patient experienced a permanent neurological deficit or end-organ dysfunction. These high-risk procedures require a concept with a special setup of the heart-lung machine. Our perfusion strategy for aortic arch replacement ensures a selective perfusion of heart, brain, and lower body during this complex procedure, and we observed excellent outcomes in this small series. This perfusion strategy is also applicable to redo procedures.

  3. Genetic potential of common bean progenies obtained by different breeding methods evaluated in various environments.

    PubMed

    Pontes Júnior, V A; Melo, P G S; Pereira, H S; Melo, L C

    2016-09-02

    Grain yield is strongly influenced by the environment, has polygenic and complex inheritance, and is a key trait in the selection and recommendation of cultivars. Breeding programs should efficiently explore the genetic variability resulting from crosses by selecting the most appropriate method for breeding in segregating populations. The goal of this study was to evaluate and compare the genetic potential of common bean progenies of carioca grain for grain yield, obtained by different breeding methods and evaluated in different environments. Progenies originating from crosses between the lines CNFC 7812 and CNFC 7829 were replanted up to the F7 generation using three breeding methods in segregating populations: population (bulk), bulk within F2 progenies, and single-seed descent (SSD). Fifteen F8 progenies per method, two controls (BRS Estilo and Perola), and the parents were evaluated in a 7 x 7 simple lattice design, with plots of two 4-m rows. The tests were conducted in 10 environments in four states of Brazil and in three growing seasons in 2009 and 2010. Genetic parameters including genetic variance, heritability, variance of the interaction, and expected selection gain were estimated. Genetic variability among progenies and the effect of progeny-environment interactions were determined for the three methods. The breeding methods differed significantly due to the effects of sampling procedures on the progenies and due to natural selection, which mainly affected the bulk method. The SSD and bulk methods provided populations with better estimates of genetic parameters and more stable progenies that were less affected by interaction with the environment.

  4. The Successive Projections Algorithm for interval selection in trilinear partial least-squares with residual bilinearization.

    PubMed

    Gomes, Adriano de Araújo; Alcaraz, Mirta Raquel; Goicoechea, Hector C; Araújo, Mario Cesar U

    2014-02-06

    In this work the Successive Projections Algorithm is presented for interval selection in N-PLS for three-way data modeling. The proposed algorithm combines the noise-reduction properties of PLS with the possibility of discarding uninformative variables in SPA. In addition, the second-order advantage can be achieved by the residual bilinearization (RBL) procedure when an unexpected constituent is present in a test sample. For this purpose, SPA was modified in order to select intervals for use in trilinear PLS. The ability of the proposed algorithm, namely iSPA-N-PLS, was evaluated on one simulated and two experimental data sets, comparing the results to those obtained by N-PLS. In the simulated system, two analytes were quantitated in two test sets, with and without an unexpected constituent. In the first experimental system, the determination of four fluorophores (l-phenylalanine, l-3,4-dihydroxyphenylalanine, 1,4-dihydroxybenzene, and l-tryptophan) was conducted with excitation-emission data matrices. In the second experimental system, quantitation of ofloxacin was performed in water samples containing two other uncalibrated quinolones (ciprofloxacin and danofloxacin) by high-performance liquid chromatography with a UV-vis diode array detector. For comparison purposes, a GA algorithm coupled with N-PLS/RBL was also used in this work. In most of the studied cases iSPA-N-PLS proved to be a promising tool for the selection of variables in second-order calibration, generating models with smaller RMSEP when compared to both the global model using all of the sensors in two dimensions and GA-N-PLS/RBL.
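
    The core SPA step, selecting at each iteration the column with the largest norm after projection orthogonal to the already-selected columns, can be sketched as follows (plain variable-wise SPA on synthetic data; the paper's interval version and the N-PLS/RBL coupling are not shown):

    ```python
    # Sketch: Successive Projections Algorithm core. Each new variable is the
    # one least collinear with those already chosen (largest residual norm
    # after orthogonal projection).
    import numpy as np

    def spa(X, n_select, start=0):
        selected = [start]
        R = X.copy()
        for _ in range(n_select - 1):
            v = R[:, selected[-1]]
            proj = np.outer(v, v) / (v @ v)            # projector onto last chosen column
            R = R - proj @ R                           # orthogonalise all columns
            norms = np.linalg.norm(R, axis=0)
            norms[selected] = -1.0                     # never re-select a column
            selected.append(int(norms.argmax()))
        return selected

    rng = np.random.default_rng(7)
    X = rng.normal(size=(40, 200))                     # samples x spectral variables
    print(spa(X, 5))
    ```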

  5. Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.

    PubMed

    Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas

    2015-02-01

    The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons.

  6. Is bioresorbable vascular scaffold acute recoil affected by baseline renal function and scaffold selection?

    PubMed

    Gunes, Haci Murat; Yılmaz, Filiz Kizilirmak; Gokdeniz, Tayyar; Demir, Gultekin Gunhan; Guler, Ekrem; Guler, Gamze Babur; Karaca, Oğuz; Cakal, Beytullah; İbişoğlu, Ersin; Boztosun, Bilal

    2016-12-01

    The aim of the present study was to investigate the relationship between glomerular filtration rate (GFR) and acute post-scaffold recoil (PSR) in patients undergoing bioresorbable vascular scaffold (BVS) implantation. We included 130 patients who underwent implantation of the everolimus-eluting BVS device (Absorb BVS; Abbott Vascular, Santa Clara, CA, USA) or the novolimus-eluting BVS device (Elixir Medical Corporation) for single- or multi-vessel disease. Clinical and angiographic variables and procedural characteristics were defined, and pre-procedural GFR was calculated for each patient. Post-procedural angiographic parameters of each patient were analyzed. The primary objective of the study was to evaluate the effect of GFR on angiographic outcomes after BVS implantation, while the secondary objective was to compare post-procedural angiographic results between the two BVS device groups. Baseline clinical characteristics and angiographic parameters were similar between the two BVS groups. Post-procedural angiographic analysis revealed significantly lower PSR in the DESolve group than in the Absorb group (0.10±0.04 vs. 0.13±0.05, p: 0.003). In the whole study population, PSR was positively correlated with age, tortuosity, calcification, and PBR, and negatively correlated with GFR. In addition, GFR was found to be an independent predictor of PSR in all groups and in the whole study population. In patients undergoing BVS implantation, low pre-procedural GFR is associated with increased post-procedural PSR. Calcification, age, PBR, dyslipidemia, and tortuosity are other independent risk factors for PSR. DESolve showed lower PSR than Absorb.

  7. Recurrence Quantification Analysis of Sentence-Level Speech Kinematics

    ERIC Educational Resources Information Center

    Jackson, Eric S.; Tiede, Mark; Riley, Michael A.; Whalen, D. H.

    2016-01-01

    Purpose: Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach--recurrence quantification analysis (RQA)--via a procedural example…
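
    A minimal sketch of the simplest RQA quantity, the recurrence rate of a thresholded distance matrix, on a synthetic kinematic signal (embedding, determinism, and the study's specific settings are omitted):

    ```python
    # Sketch: recurrence matrix from pairwise distances on a 1-D trajectory,
    # with the recurrence rate as the most basic RQA measure.
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.linspace(0, 4 * np.pi, 400)
    traj = np.sin(t) + 0.1 * rng.normal(size=t.size)   # lip/jaw-like kinematic signal

    dist = np.abs(traj[:, None] - traj[None, :])       # pairwise distance matrix
    recurrence = dist < 0.1 * dist.std()               # thresholded recurrence matrix
    print("recurrence rate:", recurrence.mean())
    ```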

  8. Extension Procedures for Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Nagy, Gabriel; Brunner, Martin; Lüdtke, Oliver; Greiff, Samuel

    2017-01-01

    We present factor extension procedures for confirmatory factor analysis that provide estimates of the relations of common and unique factors with external variables that do not undergo factor analysis. We present identification strategies that build upon restrictions of the pattern of correlations between unique factors and external variables. The…

  9. Adjustment of geochemical background by robust multivariate statistics

    USGS Publications Warehouse

    Zhou, D.

    1985-01-01

    Conventional analyses of exploration geochemical data assume that the background is a constant or slowly changing value, equivalent to a plane or a smoothly curved surface. However, it is better to regard the geochemical background as a rugged surface, varying with changes in geology and environment. This rugged surface can be estimated from observed geological, geochemical and environmental properties by using multivariate statistics. A method of background adjustment was developed and applied to groundwater and stream sediment reconnaissance data collected from the Hot Springs Quadrangle, South Dakota, as part of the National Uranium Resource Evaluation (NURE) program. Source-rock lithology appears to be a dominant factor controlling the chemical composition of groundwater or stream sediments. The most efficacious adjustment procedure is to regress uranium concentration on selected geochemical and environmental variables for each lithologic unit, and then to delineate anomalies by a common threshold set as a multiple of the standard deviation of the combined residuals. Robust versions of regression and RQ-mode principal components analysis techniques were used rather than ordinary techniques to guard against distortion caused by outliers. Anomalies delineated by this background adjustment procedure correspond with uranium prospects much better than do anomalies delineated by conventional procedures. The procedure should be applicable to geochemical exploration at different scales for other metals.
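
    A minimal sketch of the background-adjustment logic on synthetic data, using ordinary least squares per lithologic unit where the study used robust regression, with anomalies flagged by a common multiple-of-standard-deviation threshold:

    ```python
    # Sketch: per-unit regression of uranium on covariates, pooled residuals,
    # and a common anomaly threshold on the combined residuals.
    import numpy as np

    rng = np.random.default_rng(9)
    unit = rng.integers(0, 3, 500)                       # lithologic unit labels
    covars = rng.normal(size=(500, 4))                   # geochemical/environmental vars
    uranium = covars @ [1.0, 0.5, 0.0, 0.2] + unit * 0.8 + rng.normal(size=500)

    residuals = np.empty_like(uranium)
    for u in np.unique(unit):                            # regression within each unit
        m = unit == u
        Z = np.column_stack([np.ones(m.sum()), covars[m]])
        beta = np.linalg.lstsq(Z, uranium[m], rcond=None)[0]
        residuals[m] = uranium[m] - Z @ beta

    threshold = 2.5 * residuals.std()                    # common threshold
    anomalies = np.flatnonzero(residuals > threshold)
    print(len(anomalies), "anomalous samples")
    ```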

  10. Identifying factors that predict the choice and success rate of radial artery catheterisation in contemporary real world cardiology practice: a sub-analysis of the PREVAIL study data.

    PubMed

    Pristipino, Christian; Roncella, Adriana; Trani, Carlo; Nazzaro, Marco S; Berni, Andrea; Di Sciascio, Germano; Sciahbasi, Alessandro; Musarò, Salvatore Donato; Mazzarotto, Pietro; Gioffrè, Gaetano; Speciale, Giulio

    2010-06-01

    To assess the reasons behind an operator choosing to perform radial artery catheterisation (RAC) rather than femoral arterial catheterisation, and to explore why RAC may fail in the real world. A pre-determined analysis of the PREVAIL study database was performed. Relevant data were collected in a prospective, observational survey of 1,052 consecutive patients undergoing invasive cardiovascular procedures at nine Italian hospitals over a one-month observation period. By multivariate analysis, the independent predictors of RAC choice were having the procedure performed: (1) at a high procedural-volume centre; and (2) by an operator who performs a high volume of radial procedures; clinical variables played no statistically significant role. RAC failure was predicted independently by (1) a lower operator propensity to use RAC; and (2) the presence of obstructive peripheral artery disease. A 10-fold lower rate of RAC failure was observed among operators who perform RAC for > 85% of their personal caseload than among those who use RAC < 25% of the time (3.8% vs. 33.0%, respectively); by receiver operating characteristic (ROC) analysis, no threshold value for operator RAC volume predicted RAC failure. A routine RAC in all-comers is superior to a selective strategy in terms of feasibility and success rate.

  11. Multi-objective Optimization of Departure Procedures at Gimpo International Airport

    NASA Astrophysics Data System (ADS)

    Kim, Junghyun; Lim, Dongwook; Monteiro, Dylan Jonathan; Kirby, Michelle; Mavris, Dimitri

    2018-04-01

    Most aviation communities have increasing concerns about environmental impacts, which are directly linked to health issues for residents living near airports. In this study, the environmental impact of different departure procedures was analyzed using the Aviation Environmental Design Tool (AEDT). First, actual operational data were compiled at Gimpo International Airport (March 20, 2017) from an open source. Two modifications were made in the AEDT to better model the operational circumstances, and preliminary AEDT simulations were performed according to the acquired operational procedures. Simulated noise results showed good agreement with noise measurement data at specific locations. Second, a multi-objective optimization of departure procedures was performed for the Boeing 737-800. Four design variables were selected, and AEDT was linked to a variety of advanced design methods. The results showed that takeoff thrust had the greatest influence and that fuel burn and noise had an inverse relationship. Two points, representing the fuel-burn and noise optima on the Pareto front, were parsed and run in AEDT to compare with the baseline. The results showed that the noise-optimum case reduced the Sound Exposure Level 80-dB noise exposure area by approximately 5%, while the fuel-burn-optimum case reduced total fuel burn by 1% relative to the baseline for aircraft-level analysis.
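
    A minimal sketch of the Pareto-front extraction used to pick the fuel-optimal and noise-optimal designs, on synthetic bi-objective points (both objectives minimised; the AEDT coupling and the four design variables are not reproduced):

    ```python
    # Sketch: non-dominated filtering of (fuel burn, noise area) design points.
    import numpy as np

    rng = np.random.default_rng(10)
    points = rng.uniform(size=(50, 2))                   # (fuel burn, noise area) per design

    def pareto_front(pts):
        keep = []
        for i, p in enumerate(pts):
            # p is dominated if some other point is <= in both objectives
            # and strictly better in at least one.
            dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
            if not dominated:
                keep.append(i)
        return np.array(keep)

    front = pareto_front(points)
    print("non-dominated designs:", front)
    ```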

  12. The 'Whip-Stow' procedure: an innovative modification to the whipple procedure in the management of premalignant and malignant pancreatic head disease.

    PubMed

    Jeyarajah, D Rohan; Khithani, Amit; Curtis, David; Galanopoulos, Christos A

    2010-01-01

    Pancreaticoduodenectomy (PD) is the standard of care in the treatment of premalignant and malignant diseases of the head of the pancreas. Variability exists in the anastomosis with the pancreatic remnant. This work describes a safe and easy modification of the pancreatic anastomosis after PD. Ten patients underwent the "Whip-Stow" procedure for management of the pancreatic remnant. PD combined with a Puestow (lateral pancreaticojejunostomy [LPJ]) was completed using a running single-layer 4-0 Prolene suture with a duct-to-mucosa technique. Historical leak rates for LPJ and pancreaticogastrostomy (PG) are reported to be 13.9 and 15.8 per cent, respectively. Mortality, leak, and postoperative bleeding rates were 0 per cent in all patients. The Whip-Stow was completed without loupes or a microscope, using a 4-0 single-layer suture, decreasing the time and complexity of the anastomosis. Average time was 12 minutes, compared with about 50 minutes for a 5-0 or 6-0 interrupted, multilayered duct-to-mucosa anastomosis. Benefits included a long-segment LPJ. In this study, the Whip-Stow procedure proved to be a safe and simple approach to pancreatic anastomosis in selected patients. This new technique provides the benefit of technical ease while obeying the age-old principle of obtaining a wide duct-to-mucosa anastomosis.

  13. Distinguishing centrarchid genera by use of lateral line scales

    USGS Publications Warehouse

    Roberts, N.M.; Rabeni, C.F.; Stanovick, J.S.

    2007-01-01

    Predator-prey relations involving fishes are often evaluated using scales remaining in gut contents or feces. While several reliable keys help identify North American freshwater fish scales to the family level, none attempt to separate the family Centrarchidae to the genus level. Centrarchidae is of particular concern in the midwestern United States because it contains several popular sport fishes, such as smallmouth bass Micropterus dolomieu, largemouth bass M. salmoides, and rock bass Ambloplites rupestris, as well as less-sought-after species of sunfishes Lepomis spp. and crappies Pomoxis spp. Differentiating sport fish from non-sport fish has important management implications. Morphological characteristics of lateral line scales (n = 1,581) from known centrarchid fishes were analyzed. The variability of measurements within and between genera was examined to select variables that were the most useful in further classifying unknown centrarchid scales. A linear discriminant analysis model was developed using 10 variables. Based on this model, 84.4% of Ambloplites scales, 81.2% of Lepomis scales, and 86.6% of Micropterus scales were classified correctly using a jackknife procedure. © Copyright by the American Fisheries Society 2007.
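
    As a rough illustration of the classification step, the sketch below fits a linear discriminant model on 10 morphometric variables and checks it with leave-one-out ("jackknife") validation; the data here are synthetic stand-ins, not the study's 1,581 measured scales.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 10))  # placeholder for 10 scale measurements
        y = rng.choice(["Ambloplites", "Lepomis", "Micropterus"], size=300)

        lda = LinearDiscriminantAnalysis()
        # Leave-one-out cross-validation is the classical jackknife accuracy check
        acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
        print(f"jackknifed classification accuracy: {acc:.1%}")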

  14. Human salmonellosis: estimation of dose-illness from outbreak data.

    PubMed

    Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen

    2008-04-01

    The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occur in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data from 20 Salmonella outbreaks discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects, whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. The first procedure accounts for stochastic variability, whereas the second accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent, and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
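
    A hedged sketch of the general idea (not the authors' GLMM code): a first-degree fractional-polynomial dose term in a logistic fit, with a nonparametric bootstrap to reflect stochastic variability. The doses, outcomes and FP power below are simulated assumptions.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        dose = 10 ** rng.uniform(0, 8, size=200)                 # ingested organisms
        p_true = 1 / (1 + np.exp(-(-6 + 1.2 * np.log10(dose))))  # toy dose-illness curve
        ill = rng.binomial(1, p_true)

        def fp_design(d, powers=(0,)):
            # first-degree fractional polynomial of dose; power 0 denotes log(dose)
            cols = [np.ones_like(d, dtype=float)]
            cols += [np.log(d) if p == 0 else d ** p for p in powers]
            return np.column_stack(cols)

        boot = []
        for _ in range(500):
            i = rng.integers(0, len(dose), len(dose))            # resample observations
            fit = sm.GLM(ill[i], fp_design(dose[i]), family=sm.families.Binomial()).fit()
            boot.append(fit.predict(fp_design(np.array([1e3])))[0])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"P(illness | dose = 1e3): 95% bootstrap interval ({lo:.3f}, {hi:.3f})")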

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed, Osman, E-mail: osman1423@gmail.com; Patel, Mikin V.; Masrani, Abdulrahman

    Purpose: To report hepatic arterial-related complications encountered during planning and treatment angiograms for radioembolization and to understand any potentially associated risk factors. Materials and Methods: 518 mapping or treatment angiograms for 180 patients with primary or metastatic disease to the liver treated by Yttrium-90 radioembolization between 2/2010 and 12/2015 were retrospectively reviewed. Intra-procedural complications were recorded per SIR guidelines. Patient demographics, indication for treatment, prior exposure to chemotherapeutic agents, operator experience, and disease burden were reviewed. Technical variables including type of radioembolic (glass vs. resin microspheres), indication for angiography (mapping vs. treatment), variant anatomy, and attempts at coil embolization were also assessed. Results: Thirteen (13/518, 2.5%) arterial-related complications occurred in 13 patients. All but two complications resulted during transcatheter coil embolization to prevent non-target embolization. Complications included coil migration (n = 6), arterial dissection (n = 2), focal vessel perforation (n = 2), arterial thrombus (n = 2), and vasospasm prohibiting further arterial sub-selection (n = 1). Transarterial coiling was identified as a significant risk factor for complications on both univariate and multivariate regression analysis (odds ratio 7.8, P = 0.004). Usage of resin microspheres was also a significant risk factor (odds ratio 9.5, P = 0.042). No other technical parameters or pre-procedural variables were significant after adjusting for confounding on multivariate analysis (P > 0.05). Conclusion: Intra-procedural hepatic arterial complications encountered during radioembolization were infrequent but occurred mainly during coil embolization to prevent non-target delivery to extra-hepatic arteries.

  16. A Study of Clinical Coding Accuracy in Surgery: Implications for the Use of Administrative Big Data for Outcomes Management.

    PubMed

    Nouraei, S A R; Hudovsky, A; Frampton, A E; Mufti, U; White, N B; Wathen, C G; Sandhu, G S; Darzi, A

    2015-06-01

    Clinical coding is the translation of clinical activity into a coded language. Coded data drive hospital reimbursement and are used for audit, research, benchmarking, and outcomes management. We undertook a 2-center audit of coding accuracy across surgery. Clinician-auditor multidisciplinary teams reviewed the coding of 30,127 patients and assessed accuracy at primary and secondary diagnosis and procedure levels, morbidity level, complications assignment, and financial variance. Postaudit data of a randomly selected sample of 400 cases were reaudited by an independent team. At least 1 coding change occurred in 15,402 patients (51%). There were 3911 (13%) and 3620 (12%) changes to primary diagnoses and procedures, respectively. In 5183 (17%) patients, the Health Resource Grouping changed, resulting in income variance of £3,974,544 (+6.2%). The morbidity level changed in 2116 (7%) patients (P < 0.001). The number of assigned complications rose from 2597 (8.6%) to 2979 (9.9%) (P < 0.001). Reaudit resulted in further primary diagnosis and procedure changes in 8.7% and 4.8% of patients, respectively. Coded data are a key engine for knowledge-driven health care provision and are used, increasingly at the individual surgeon level, to benchmark performance. Surgical clinical coding is prone to subjectivity, variability, and error (SVE). A specialty-by-specialty understanding of the nature and clinical significance of informatics variability, together with strategies to reduce it, is necessary to allow accurate assumptions and informed decisions concerning the scope and clinical applicability of administrative data in surgical outcomes improvement.

  17. Effect of trotting speed on kinematic variables measured by use of extremity-mounted inertial measurement units in nonlame horses performing controlled treadmill exercise.

    PubMed

    Cruz, Antonio M; Vidondo, Beatriz; Ramseyer, Alessandra A; Maninchedda, Ugo E

    2018-02-01

    OBJECTIVE To assess the effects of speed on kinematic variables measured by use of extremity-mounted inertial measurement units (IMUs) in nonlame horses performing controlled exercise on a treadmill. ANIMALS 10 nonlame horses. PROCEDURES 6 IMUs were attached at predetermined locations on 10 nonlame Franches Montagnes horses. Data were collected in triplicate during trotting at 3.33 and 3.88 m/s on a high-speed treadmill. Thirty-three selected kinematic variables were analyzed. Repeated-measures ANOVA was used to assess the effect of speed. RESULTS Significant differences between the 2 speeds were detected for most temporal (11/14) and spatial (12/19) variables. The observed changes would translate, at the higher speed, into a gait characterized by increased stride length, protraction and retraction, flexion and extension, mediolateral movement of the tibia, and symmetry, with a reduced stride duration but otherwise similar temporal variables. However, even though the tibia coronal range of motion was significantly different between speeds, the high degree of variability raised concerns about whether these changes were clinically relevant. For some variables, the lower trotting speed apparently was associated with more variability than was the higher trotting speed. CONCLUSIONS AND CLINICAL RELEVANCE At a higher trotting speed, horses moved in the same manner (eg, the temporal events investigated occurred at the same relative time within the stride). However, from a spatial perspective, horses moved with greater action of the segments evaluated. The detected changes in kinematic variables indicate that trotting speed should be controlled or kept constant during gait evaluation.

  18. SSME/side loads analysis for flight configuration, revision A. [structural analysis of space shuttle main engine under side load excitation

    NASA Technical Reports Server (NTRS)

    Holland, W.

    1974-01-01

    This document describes the dynamic loads analysis accomplished for the Space Shuttle Main Engine (SSME) considering the side load excitation associated with transient flow separation on the engine bell during ground ignition. The results contained herein pertain only to the flight configuration. A Monte Carlo procedure was employed to select the input variables describing the side load excitation and the loads were statistically combined. This revision includes an active thrust vector control system representation and updated orbiter thrust structure stiffness characteristics. No future revisions are planned but may be necessary as system definition and input parameters change.

  19. Snowmelt Runoff Model in Japan

    NASA Technical Reports Server (NTRS)

    Ishihara, K.; Nishimura, Y.; Takeda, K.

    1985-01-01

    The preliminary Japanese snowmelt runoff model was modified so that all the input variables are those of antecedent days and the inflow of the previous day is taken into account. A few LANDSAT images obtained in the past were used effectively to verify and modify the depletion curve derived from the snow water equivalent distribution at the maximum stage and the accumulated degree days at one representative point selected in the basin. Together with the depletion curve, the relationship between the basin-wide daily snowmelt amount and the air temperature at that point is exhibited in nomograph form for the convenience of the model user. The runoff forecasting procedure is summarized.

  20. A design procedure for the handling qualities optimization of the X-29A aircraft

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Cox, Timothy H.

    1989-01-01

    The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.

  1. 49 CFR 542.2 - Procedures for selecting low theft light duty truck lines with a majority of major parts...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Procedures for selecting low theft light duty... TRUCK LINES TO BE COVERED BY THE THEFT PREVENTION STANDARD § 542.2 Procedures for selecting low theft... a low theft rate have major parts interchangeable with a majority of the covered major parts of a...

  2. What Variables Appear Important in Changing Traditional Inservice Training Procedures.

    ERIC Educational Resources Information Center

    Sobol, Francis Thomas

    Herein are discussed descriptive findings from the educational literature on the question of what variables appear important in changing traditional in-service training procedures. The question of the content versus the process of in-service training, important problems in in-service training programs, and implications of the important problems…

  3. Diagnostic Procedures for Detecting Nonlinear Relationships between Latent Variables

    ERIC Educational Resources Information Center

    Bauer, Daniel J.; Baldasaro, Ruth E.; Gottfredson, Nisha C.

    2012-01-01

    Structural equation models are commonly used to estimate relationships between latent variables. Almost universally, the fitted models specify that these relationships are linear in form. This assumption is rarely checked empirically, largely for lack of appropriate diagnostic techniques. This article presents and evaluates two procedures that can…

  4. [Costing nuclear medicine diagnostic procedures].

    PubMed

    Markou, Pavlos

    2005-01-01

    To the Editor: Referring to a recent special report on the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity Based Costing (ABC) method is a new approach to costing imaging services that can provide the most accurate cost data, but it is difficult to apply to nuclear medicine diagnostic procedures, because ABC requires determining and analyzing all direct and indirect costs of each procedure, according to all of its activities. Traditional costing methods, such as estimating incomes and expenses per procedure or fixed and variable costs per procedure (widely used in break-even point analysis), and the ratio-of-costs-to-charges method, may be performed easily in nuclear medicine departments to evaluate the variability of, and differences between, costs and reimbursement charges.

  5. 47 CFR 90.165 - Procedures for mutually exclusive applications.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... grant, pursuant to § 1.935 of this chapter. (1) Selection methods. In selecting the application to grant, the Commission may use competitive bidding, random selection, or comparative hearings, depending on... chapter, either before or after employing selection procedures. (3) Type of filing group used. Except as...

  6. Disparities in fertility-sparing surgery in adolescent and young women with stage I ovarian dysgerminoma.

    PubMed

    Stafman, Laura L; Maizlin, Ilan I; Dellinger, Matthew; Gow, Kenneth W; Goldfarb, Melanie; Nuchtern, Jed G; Langer, Monica; Vasudevan, Sanjeev A; Doski, John J; Goldin, Adam B; Raval, Mehul; Beierle, Elizabeth A

    2018-04-01

    In many cancers, racial and socioeconomic disparities exist regarding the extent of surgery. For ovarian dysgerminoma, fertility-sparing (FS) surgery is recommended whenever possible. The aim of this study was to investigate rates of FS versus non-fertility-sparing (NFS) procedures for stage I ovarian dysgerminoma in adolescents and young adults (AYAs) by ethnicity/race and socioeconomic status. The National Cancer Data Base was queried for patients with ovarian dysgerminoma from 1998 to 2012. After selecting patients aged 15-39 y with stage I disease, a multivariate regression analysis was performed, and rates of FS and NFS procedures were compared, first according to ethnicity/race, and then by socioeconomic surrogate variables. Among the 687 AYAs with stage I ovarian dysgerminoma, there was no significant difference in rates of FS and NFS procedures based on ethnicity/race alone (P = 0.17), but there was a significant difference in procedure type for all three socioeconomic surrogates. The uninsured had higher NFS rates (30%) than those with government (21%) or private (19%) insurance (P = 0.036). Those in the poorest ZIP codes had almost twice the rate of NFS procedures (31%) compared with those in the most affluent ZIP codes (17%). For those in the least-educated regions, 24% underwent NFS procedures compared to 14% in the most-educated areas (P = 0.027). AYAs with stage I ovarian dysgerminoma in lower socioeconomic groups were more likely to undergo NFS procedures than those in higher socioeconomic groups, but there was no difference in rates of FS versus NFS procedures by ethnicity/race. Approaches aimed at reducing socioeconomic disparities require further examination. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Improving the safety and quality of nursing care through standardized operating procedures in Bosnia and Herzegovina.

    PubMed

    Ausserhofer, Dietmar; Rakic, Severin; Novo, Ahmed; Dropic, Emira; Fisekovic, Eldin; Sredic, Ana; Van Malderen, Greet

    2016-06-01

    We explored how selected 'positive deviant' healthcare facilities in Bosnia and Herzegovina approach the continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. Standardized nursing care is internationally recognized as a critical element of safe, high-quality health care; yet very little research has examined one of its key instruments: nursing-related standard operating procedures. Despite variability in Bosnia and Herzegovina's healthcare and nursing care quality, we assumed that some healthcare facilities would have developed effective strategies to elevate nursing quality and safety through the use of standard operating procedures. Guided by the 'positive deviance' approach, we used a multiple-case study design to examine a criterion sample of four facilities (two primary healthcare centres and two hospitals), collecting data via focus groups and individual interviews. In each studied facility, certification/accreditation processes were crucial to initiating the continuous development, adaptation, implementation, monitoring and evaluation of nursing-related standard operating procedures. In one hospital and one primary healthcare centre, nurses working in advanced roles (i.e. quality coordinators) were responsible for developing and implementing nursing-related standard operating procedures. Across the four studied institutions, we identified a consistent approach to processes related to standard operating procedures. The certification/accreditation process is enabling necessary changes in institutions' organizational cultures, empowering nurses to take on advanced roles in improving the safety and quality of nursing care. Standardizing nursing procedures is key to improving the safety and quality of nursing care. Nursing and health policies are needed in Bosnia and Herzegovina to establish a functioning institutional framework, including regulatory bodies, educational systems for developing nurses' capacities, and the inclusion of nursing-related standard operating procedures in certification/accreditation standards. © 2016 International Council of Nurses.

  8. Transoral Incisionless Fundoplication (TIF 2.0): A Meta-Analysis of Three Randomized, Controlled Clinical Trials.

    PubMed

    Gerson, Lauren; Stouch, Bruce; Lobonţiu, Adrian

    2018-01-01

    The TIF procedure has emerged as an endoscopic treatment for patients with refractory gastro-esophageal reflux disease (GERD). Previous systematic reviews of the TIF procedure conflated findings from studies of modalities that do not reflect the current 2.0 procedure technique or refined, data-backed patient selection criteria. A meta-analysis was conducted using data only from randomized studies that assessed the TIF 2.0 procedure against a control. The purpose of the meta-analysis was to determine the efficacy and long-term outcomes associated with performance of the TIF 2.0 procedure in patients with chronic long-term refractory GERD on optimized PPI therapy, including esophageal pH, PPI utilization and quality of life. Methods: Three prospective research questions were predicated on the outcomes of the TIF procedure compared to patients who received PPI therapy or sham, concomitant treatment for GERD, and patient-reported quality of life. Event rates were calculated using the random effects model. Since the time of follow-up post-TIF procedure was variable, the analysis incorporated the time of follow-up for each individual patient at the 3-year time point. Results: Results from this meta-analysis, including data from 233 patients, demonstrated that TIF subjects at 3 years had improved esophageal pH, a decrease in PPI utilization, and improved quality of life. Conclusions: In a meta-analysis of randomized, controlled trials (RCTs), the TIF procedure in patients with GERD refractory to PPIs produced significant changes, compared with sham or PPI therapy, in esophageal pH, decreased PPI utilization, and improved quality of life.

  9. Postoperative Early Major and Minor Complications in Laparoscopic Vertical Sleeve Gastrectomy (LVSG) Versus Laparoscopic Roux-en-Y Gastric Bypass (LRYGB) Procedures: A Meta-Analysis and Systematic Review.

    PubMed

    Osland, Emma; Yunus, Rossita Mohamad; Khan, Shahjahan; Alodat, Tareq; Memon, Breda; Memon, Muhammed Ashraf

    2016-10-01

    Laparoscopic Roux-en-Y gastric bypass (LRYGB) and laparoscopic vertical sleeve gastrectomy (LVSG) have been proposed as cost-effective strategies to manage obesity-related chronic disease. The aim of this meta-analysis and systematic review was to compare the early postoperative complication rates, i.e. within 30 days, reported from randomized control trials (RCTs) comparing these two procedures. RCTs comparing the early complication rates following LVSG and LRYGB between 2000 and 2015 were selected from PubMed, Medline, Embase, Science Citation Index, Current Contents, and the Cochrane database. The outcome variables analyzed included 30-day mortality, major and minor complications and interventions required for their management, length of hospital stay, readmission rates, operating time, and conversions from laparoscopic to open procedures. Six RCTs involving a total of 695 patients (LVSG n = 347, LRYGB n = 348) reported on early major complications. A statistically significant reduction in the relative odds of early major complications favoring the LVSG procedure was noted (p = 0.05). Five RCTs representing 633 patients (LVSG n = 317, LRYGB n = 316) reported early minor complications. A statistically non-significant reduction of 29% in the relative odds of early minor complications favoring the LVSG procedure was observed (p = 0.4). However, other outcomes directly related to complications, including reoperation rates, readmission rates, and 30-day mortality, showed comparable effect sizes for both surgical procedures. This meta-analysis and systematic review of RCTs suggests that fewer early major and minor complications are associated with LVSG compared with LRYGB. However, this does not translate into differences in readmission rates, reoperation rates, or 30-day mortality between the procedures.

  10. The feasibility of harmonizing gluten ELISA measurements.

    PubMed

    Rzychon, Malgorzata; Brohée, Marcel; Cordeiro, Fernando; Haraszi, Reka; Ulberth, Franz; O'Connor, Gavin

    2017-11-01

    Many publications have highlighted that routine ELISA methods do not give rise to equivalent gluten content measurement results. In this study, we assess this variation between results and its likely impact on the enforcement of the EU gluten-free legislation. This study systematically examines the feasibility of harmonizing gluten ELISA assays by the introduction of: a common extraction procedure; a common calibrator, such as a pure gluten extract; and an incurred matrix material. The comparability of measurements is limited by a weak correlation between kit results caused by differences in the selectivity of the methods. This lack of correlation produces bias that cannot be corrected by using reference materials alone. The use of a common calibrator reduced the between-assay variability to some extent, but variation due to differences in the selectivity of the assays was unaffected. Consensus on robust markers and their conversion to "gluten content" is required. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection.

    PubMed

    Tong, Xiayu; Wang, Zhou-Jing

    2016-09-19

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers' judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice.
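
    For intuition only, here is a scalar simplification of the induced ordered weighted averaging idea: arguments are reordered by an order-inducing variable (here, each decision-maker's mean absolute deviation between original and rectified judgments, smaller meaning more consistent) before positional weights are applied. The paper's IIOWA operator acts on intuitionistic fuzzy values (membership/non-membership pairs), which this sketch does not model.

        import numpy as np

        def induced_owa(values, inducing, weights):
            """Sort values by the inducing variable (most consistent judge first),
            then apply positional OWA weights."""
            order = np.argsort(inducing)      # ascending MAD: best inputs first
            return float(np.dot(weights, np.asarray(values)[order]))

        # Three decision-makers' judgments on one preference entry (hypothetical)
        vals = [0.62, 0.55, 0.70]
        mad = [0.04, 0.10, 0.02]              # consistency scores per judge
        w = [0.5, 0.3, 0.2]                   # heavier weight on consistent inputs
        print(induced_owa(vals, mad, w))      # collective value: about 0.646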

  12. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection

    PubMed Central

    Tong, Xiayu; Wang, Zhou-Jing

    2016-01-01

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers’ judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice. PMID:27657097

  13. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2004-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
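
    The Pareto bookkeeping such an algorithm relies on can be sketched briefly; the binning selection variants and the gene-space transformation of the paper are not reproduced here. This is a generic dominance test and non-dominated filter for minimization problems.

        import numpy as np

        def dominates(a, b):
            """True if objective vector a Pareto-dominates b (minimization)."""
            a, b = np.asarray(a), np.asarray(b)
            return bool(np.all(a <= b) and np.any(a < b))

        def pareto_front(points):
            """Return the non-dominated subset of a population's objective vectors."""
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
        print(pareto_front(pop))  # (3.0, 4.0) is dominated by (2.0, 3.0)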

  14. Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.

    2005-01-01

    A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.

  15. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second step, an interval inverse DEA model is executed for resource allocation in a short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.

  16. Constructing Compact Takagi-Sugeno Rule Systems: Identification of Complex Interactions in Epidemiological Data

    PubMed Central

    Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108
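
    A generic greedy forward-selection loop of the kind used to assemble the parsimonious rule set is sketched below; `score` stands in for a criterion built from the paper's R, L, and ω statistics, and the toy example at the end is purely illustrative.

        def forward_select(candidates, score, min_gain=0.01):
            """Repeatedly add the candidate rule that most improves the criterion,
            stopping when no remaining rule helps by at least min_gain."""
            selected, best = [], score([])
            remaining = list(candidates)
            while remaining:
                gain, rule = max((score(selected + [r]) - best, r) for r in remaining)
                if gain < min_gain:
                    break  # no rule improves the model enough
                selected.append(rule)
                remaining.remove(rule)
                best += gain
            return selected

        # Toy criterion: each rule contributes a fixed gain to the fit score
        gains = {"r1": 0.30, "r2": 0.20, "r3": 0.001}
        print(forward_select(gains, lambda rules: sum(gains[r] for r in rules)))
        # -> ['r1', 'r2']; 'r3' falls below the stopping threshold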

  17. Impact of multicollinearity on small sample hydrologic regression models

    NASA Astrophysics Data System (ADS)

    Kroll, Charles N.; Song, Peter

    2013-06-01

    Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how best to address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely in model predictions, it is recommended that OLS be employed, since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
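
    As one concrete example of the screening step, the sketch below implements a common variant of VIF screening (drop the worst column until all VIFs fall below a cutoff of 10, a conventional rule of thumb); the study's exact screening rule is not specified here, so treat the cutoff and loop as assumptions.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        def vif_screen(X: pd.DataFrame, cutoff=10.0) -> pd.DataFrame:
            """Iteratively drop the column with the largest variance inflation
            factor until all remaining VIFs are below the cutoff."""
            X = X.copy()
            while X.shape[1] > 1:
                vifs = [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
                worst = int(np.argmax(vifs))
                if vifs[worst] < cutoff:
                    break
                X = X.drop(columns=X.columns[worst])  # remove most collinear column
            return X

        # Example: 'a' and 'b' are nearly identical, so one of them is dropped
        rng = np.random.default_rng(5)
        a = rng.normal(size=200)
        demo = pd.DataFrame({"a": a, "b": a + 0.01 * rng.normal(size=200),
                             "c": rng.normal(size=200)})
        print(vif_screen(demo).columns.tolist())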

  18. A revision of chiggers of the minuta species-group (Acari: Trombiculidae: Neotrombicula Hirst, 1925) using multivariate morphometrics.

    PubMed

    Stekolnikov, Alexandr A; Klimov, Pavel B

    2010-09-01

    We revise chiggers belonging to the minuta-species group (genus Neotrombicula Hirst, 1925) from the Palaearctic using size-free multivariate morphometrics. This approach allowed us to resolve several diagnostic problems. We show that the widely distributed Neotrombicula scrupulosa Kudryashova, 1993 forms three spatially and ecologically isolated groups different from each other in size or shape (morphometric property) only: specimens from the Caucasus are distinct from those from Asia in shape, whereas the Asian specimens from plains and mountains are different from each other in size. We developed a multivariate classification model to separate three closely related species: N. scrupulosa, N. lubrica Kudryashova, 1993 and N. minuta Schluger, 1966. This model is based on five shape variables selected from an initial 17 variables by a best subset analysis using a custom size-correction subroutine. The variable selection procedure slightly improved the predictive power of the model, suggesting that it not only removed redundancy but also reduced 'noise' in the dataset. The overall classification accuracy of this model is 96.2, 96.2 and 95.5%, as estimated by internal validation, external validation and jackknife statistics, respectively. Our analyses resulted in one new synonymy: N. dimidiata Stekolnikov, 1995 is considered to be a synonym of N. lubrica. Both N. scrupulosa and N. lubrica are recorded from new localities. A key to species of the minuta-group incorporating results from our multivariate analyses is presented.

  19. Constructing compact Takagi-Sugeno rule systems: identification of complex interactions in epidemiological data.

    PubMed

    Zhou, Shang-Ming; Lyons, Ronan A; Brophy, Sinead; Gravenor, Mike B

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data.

  20. A comparison of two nano-sized particle air filtration tests in the diameter range of 10 to 400 nanometers

    NASA Astrophysics Data System (ADS)

    Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.

    2007-01-01

    Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability, and comparisons between results from the tests. One test system, which gives a discrete penetration for a given particle size, is the TSI 8160 Automated Filter Tester (updated and now commercially available as the TSI 3160) manufactured by TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for a fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown, with a discussion of test variables and the need for proper SMPS volume purging and a filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm, with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in this size range.
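
    In the SMPS-based test, the size-resolved penetration is simply the ratio of downstream to upstream number concentration in each size bin, as the minimal sketch below illustrates with toy data; the purge and penetration corrections discussed above are omitted.

        import numpy as np

        diameters = np.logspace(1, np.log10(400), 32)      # 10-400 nm size bins
        rng = np.random.default_rng(2)
        up = rng.uniform(1e3, 1e4, 32)                     # upstream counts (toy)
        down = up * 0.05 * (diameters / 400.0) ** 0.5      # toy downstream signal

        penetration = 100.0 * down / up                    # percent penetration per bin
        for d, p in zip(diameters[::8], penetration[::8]):
            print(f"{d:6.1f} nm: {p:5.2f} % penetration")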

  1. Utility of Von Pechman synthesis of coumarin reaction for development of spectrofluorimetric method for quantitation of salmeterol xinafoate in pharmaceutical preparations and human plasma.

    PubMed

    Awad, Mohamed; Hammad, Mohamed A; Abdel-Megied, Ahmed M; Omar, Mahmoud A

    2018-04-30

    A simple, precise and selective spectrofluorimetric technique was developed for the quantitation of the selective β2-agonist drug salmeterol xinafoate (SAL). Exploiting its phenolic nature, the method is based on the reaction of the studied drug with ethyl acetoacetate (EAA) to yield a highly fluorescent coumarin product that can be detected at 480 nm (λex = 420 nm). The procedure obeys Beer's law, with a correlation coefficient of r = 0.9999, over the concentration range of 500 to 5000 ng ml-1, with and 177 ng ml-1 for the limit of detection (LOD) and limit of quantification (LOQ), respectively. The various reaction variables influencing the stability and formation of the coumarin product were carefully examined and optimized to ensure the greatest sensitivity of the procedure. The proposed technique was validated according to the US Food and Drug Administration (FDA) guidelines for bioanalytical methods and was successfully applied to the quantitation of SAL in both pharmaceutical preparations (% recovery = 100.06 ± 1.07) and spiked human plasma (% recovery = 96.64-97.14 ± 1.01-1.52). Copyright © 2018 John Wiley & Sons, Ltd.

  2. A DNA fingerprinting procedure for ultra high-throughput genetic analysis of insects.

    PubMed

    Schlipalius, D I; Waldron, J; Carroll, B J; Collins, P J; Ebert, P R

    2001-12-01

    Existing procedures for the generation of polymorphic DNA markers are not optimal for insect studies in which the organisms are often tiny and background molecular information is often non-existent. We have used a new high throughput DNA marker generation protocol called randomly amplified DNA fingerprints (RAF) to analyse the genetic variability in three separate strains of the stored grain pest, Rhyzopertha dominica. This protocol is quick, robust and reliable even though it requires minimal sample preparation, minute amounts of DNA and no prior molecular analysis of the organism. Arbitrarily selected oligonucleotide primers routinely produced approximately 50 scoreable polymorphic DNA markers, between individuals of three independent field isolates of R. dominica. Multivariate cluster analysis using forty-nine arbitrarily selected polymorphisms generated from a single primer reliably separated individuals into three clades corresponding to their geographical origin. The resulting clades were quite distinct, with an average genetic difference of 37.5 +/- 6.0% between clades and of 21.0 +/- 7.1% between individuals within clades. As a prelude to future gene mapping efforts, we have also assessed the performance of RAF under conditions commonly used in gene mapping. In this analysis, fingerprints from pooled DNA samples accurately and reproducibly reflected RAF profiles obtained from individual DNA samples that had been combined to create the bulked samples.

  3. The Influence of Parenting Style and Child Temperament on Child-Parent-Dentist Interactions.

    PubMed

    Aminabadi, Naser Asl; Deljavan, Alireza Sighari; Jamali, Zahra; Azar, Fatemeh Pournaghi; Oskouei, Sina Ghertasi

    2015-01-01

    This study aimed to investigate the interaction between parenting style and child temperament as modulators of anxiety and behavior in children during dental procedures. Healthy four- to six-year-olds (n = 288) with carious primary molars scheduled to receive amalgam fillings were selected. The Primary Caregivers Practices Report was used to assess parenting style, and the Children's Behavior Questionnaire-Very Short Form was used to evaluate child temperament. Children were managed using common behavior management strategies. Child behavior and anxiety during the procedure were assessed using the Frankl behavior rating scale and the verbal skill scale, respectively. Spearman's correlation coefficient was used to examine the correlation among variables. Authoritative parenting style was positively related to positive child behavior (P<.05) and negatively related to child anxiety (P<.05). A positive relationship existed between the permissive subscale and both negative behaviors (P<.05) and child anxiety (P<.05). There was a significant direct effect of authoritative parenting style on the effortful control trait (P<.05) and of permissive parenting style on child negative affectivity (P<.05). Parenting style appeared to mediate child temperament and anxiety and was related to the child's behavior. Parenting style should be considered in the selection of behavior guidance techniques.

  4. A review of covariate selection for non-experimental comparative effectiveness research.

    PubMed

    Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler

    2013-11-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
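
    The structural point that adjusting for a common cause removes bias, while adjusting for certain other variable types can add it, is easy to see in a toy simulation (all variables synthetic): conditioning on a collider, a common effect of treatment and outcome, re-introduces bias that confounder adjustment had removed.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 50_000
        c = rng.normal(size=n)                       # confounder: causes T and Y
        t = 0.8 * c + rng.normal(size=n)             # treatment
        y = 0.5 * t + 0.8 * c + rng.normal(size=n)   # outcome; true effect of t is 0.5
        m = t + y + rng.normal(size=n)               # collider: caused by T and Y

        def coef_t(*covs):
            X = sm.add_constant(np.column_stack([t, *covs]))
            return sm.OLS(y, X).fit().params[1]      # coefficient on t

        print(f"crude:             {coef_t():.3f}")      # biased upward by c
        print(f"adjusted for c:    {coef_t(c):.3f}")     # close to 0.5
        print(f"adjusted for c, m: {coef_t(c, m):.3f}")  # collider bias re-enters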

  5. A Review of Covariate Selection for Nonexperimental Comparative Effectiveness Research

    PubMed Central

    Sauer, Brian C.; Brookhart, Alan; Roy, Jason; Vanderweele, Tyler

    2014-01-01

    This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research (CER), and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, while adjustment for other structural types may increase bias. For this reason variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. PMID:24006330

  6. Race Differences in Cardiac Catheterization: The Role of Social Contextual Variables

    PubMed Central

    Kressin, Nancy R.

    2010-01-01

    BACKGROUND Race differences in the receipt of invasive cardiac procedures are well-documented but the etiology remains poorly understood. OBJECTIVE We examined how social contextual variables were related to race differences in the likelihood of receiving cardiac catheterization in a sample of veterans who were recommended to undergo the procedure by a physician. DESIGN Prospective observational cohort study. PARTICIPANTS A subsample from a study examining race disparities in cardiac catheterization of 48 Black/African American and 189 White veterans who were recommended by a physician to undergo cardiac catheterization. MEASURES We assessed social contextual variables (e.g., knowing somebody who had the procedure, being encouraged by family or friends), clinical variables (e.g., hypertension, maximal medical therapy), and if participants received cardiac catheterization at any point during the study. KEY RESULTS Blacks/African Americans were less likely to undergo cardiac catheterization compared to Whites even after controlling for age, education, and clinical variables (OR = 0.31; 95% CI, 0.13, 0.75). After controlling for demographic and clinical variables, three social contextual variables were significantly related to increased likelihood of receiving catheterization: knowing someone who had undergone the procedure (OR = 3.14; 95% CI, 1.70, 8.74), social support (OR = 2.05; 95% CI, 1.17, 2.78), and being encouraged by family to have procedure (OR = 1.45; 95% CI, 1.08, 1.90). After adding the social contextual variables, race was no longer significantly related to the likelihood of receiving catheterization, thus suggesting that social context plays an important role in the relationship between race and cardiac catheterization. CONCLUSIONS Our results suggest that social contextual factors are related to the likelihood of receiving recommended care. In addition, accounting for these relationships attenuated the observed race disparities between Whites and Blacks/African Americans who were recommended to undergo cardiac catheterization by their physicians. PMID:20383600

  7. Does Self-Selection Affect Samples’ Representativeness in Online Surveys? An Investigation in Online Video Game Research

    PubMed Central

    van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-01-01

    Background The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Objective Our objective was to explore the representativeness of a self-selected sample of online gamers using online players’ virtual characters (avatars). Methods All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars’ characteristics were defined using various games’ scores, reported on the WoW’s official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. Results We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Conclusions Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted. PMID:25001007

  8. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take the value zero for a large share of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two-stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part is assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, are considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
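
    A minimal sketch of the univariate FP-spike coding for a single spike-at-zero covariate: a binary any-exposure indicator plus a fractional-polynomial term (here a log, i.e. FP power 0) on the positive part. The data are simulated and the procedure's two-stage function-selection tests are omitted.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        x = np.where(rng.random(500) < 0.4, 0.0, rng.gamma(2.0, 10.0, 500))  # spike at zero
        eta = -1.0 + 0.8 * (x > 0) + 0.5 * np.log(np.where(x > 0, x, 1.0))
        y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

        spike = (x > 0).astype(float)                            # binary indicator
        fp = np.where(x > 0, np.log(np.maximum(x, 1e-12)), 0.0)  # log term on x > 0
        X = sm.add_constant(np.column_stack([spike, fp]))
        print(sm.Logit(y, X).fit(disp=0).params)                 # intercept, spike, FP term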

  9. Selecting Students for Training in Health Care. A Practical Guide to Improving Selection Procedures. WHO Offset Publication No. 74.

    ERIC Educational Resources Information Center

    Bennett, Mick; Wakeford, Richard

    This guide is intended to help those responsible for choosing health care trainees to develop and improve their selection procedures. Special reference is given to health workers in maternal and child health. Chapter 1 deals with health care policy implications for selection of trainees, the different functions of selection and conflicts that…

  10. Paternal influences on pregnancy complications and birth outcomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleghorn de Rohrmoser, D.C.

    1992-01-01

    The purpose of this study was to investigate the relationship of selected characteristics of the paternal work environment and occupational history to the incidence of complications in pregnancy, complications in labor and anomalies in birth outcomes. The literature suggested that male exposure to teratogenic hazards in the form of radiation and chemical compounds, primarily solvents, is implicated in reproductive disorders and malformed offspring in animals. Similarly, some recent research suggests that the exposure of male workers to such hazards on their job may have consequences for their spouses and children. Based on these experimental research studies and analyses of persons working in high-risk occupations, a broader study of the potential contribution of paternal work environment variables to the success of pregnancy and birth outcomes seemed warranted. Based upon the literature review, a model was proposed for predicting complications in pregnancy, complications in labor and birth outcome (normal birth, low birth weight, congenital malformations and fetal death). From the 1980 National Natality Survey and the 1980 National Fetal Mortality Survey, four sub-samples of married couples, with both husband and wife employed, were selected on the basis of one of the four birth outcomes. The model called for controlling a range of maternal intrinsic and extrinsic health and behavioral variables known to be related to birth outcomes. Multiple logistic regression procedures were used to analyze the effects of the father's on-the-job exposure to radiation and solvents on complications in pregnancy and labor and on birth outcome, while controlling for maternal variables. The results indicated that none of the paternal variables were predictors of complications in labor. Further, there was no clear pattern of results, though the father's degree of exposure to solvents and exposure to radiation did reach significance in some analyses.

  11. An audit of the nature and impact of clinical coding subjectivity variability and error in otolaryngology.

    PubMed

    Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S

    2013-12-01

    To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team's (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 HRGs (16%) changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular skull-base surgery), laryngology (within that, tracheostomy) and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes to activity without further quality assurance resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG version 3.5 and compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be reduced, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.

  12. Guidelines for Professional Staff Selection. A Guide to Job Responsibilities of the School Personnel Administrator.

    ERIC Educational Resources Information Center

    American Association of School Personnel Administrators, Seven Hills, OH.

    These guidelines are intended to provide personnel administrators with a means of evaluating their current practices and procedures in teacher selection. The guidelines cover recruitment, hiring criteria, employment interviews, and the follow-up to selection. A suggested personnel selection procedure outlines application, file preparation, and the…

  13. 48 CFR 715.370 - Alternative source selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 (2011-10-01) Alternative source selection procedures. 715.370 Section 715.370 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715...

  14. Estimating times of surgeries with two component procedures: comparison of the lognormal and normal models.

    PubMed

    Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E

    2003-01-01

    Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogeneous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPTs and 46,322 surgical cases with only one CPT from a large teaching hospital to determine whether the distribution of dual-procedure surgery times more closely fits a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
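
    The core model comparison can be illustrated in a few lines, assuming scipy is available (the durations below are simulated, not the authors' data): apply the Shapiro-Wilk test to the durations on the raw scale (normal model) and on the log scale (lognormal model).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        durations = rng.lognormal(mean=4.5, sigma=0.5, size=200)  # simulated minutes

        w_raw, p_raw = stats.shapiro(durations)           # fit of the normal model
        w_log, p_log = stats.shapiro(np.log(durations))   # fit of the lognormal model
        print(f"normal:    W={w_raw:.3f} p={p_raw:.4g}")
        print(f"lognormal: W={w_log:.3f} p={p_log:.4g}")
        # A higher W (and a non-rejecting p) on the log scale favours the lognormal model.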

  15. Omnibus Tests for Interactions in Repeated Measures Designs with Dichotomous Dependent Variables.

    ERIC Educational Resources Information Center

    Serlin, Ronald C.; Marascuilo, Leonard A.

    When examining a repeated measures design with independent groups for a significant group by trial interaction, classical analysis of variance or multivariate procedures can be used if the assumptions underlying the tests are met. Neither procedure may be justified for designs with small sample sizes and dichotomous dependent variables. An omnibus…

  16. Structural Analysis of Correlated Factors: Lessons from the Verbal-Performance Dichotomy of the Wechsler Scales.

    ERIC Educational Resources Information Center

    Macmann, Gregg M.; Barnett, David W.

    1994-01-01

    Describes exploratory and confirmatory analyses of verbal-performance procedures to illustrate concepts and procedures for analysis of correlated factors. Argues that, based on convergent and discriminant validity criteria, factors should have higher correlations with variables that they purport to measure than with other variables. Discusses…

  17. Statistical Assessment of Variability of Terminal Restriction Fragment Length Polymorphism Analysis Applied to Complex Microbial Communities ▿ †

    PubMed Central

    Rossi, Pierre; Gillet, François; Rohrbach, Emmanuelle; Diaby, Nouhou; Holliger, Christof

    2009-01-01

    The variability of terminal restriction fragment length polymorphism analysis applied to complex microbial communities was assessed statistically. Recent technological improvements were implemented in the successive steps of the procedure, resulting in a standardized procedure that provided a high level of reproducibility. PMID:19749066

  18. Reducing Covert Self-Injurious Behavior Maintained by Automatic Reinforcement through a Variable Momentary DRO Procedure

    ERIC Educational Resources Information Center

    Toussaint, Karen A.; Tiger, Jeffrey H.

    2012-01-01

    Covert self-injurious behavior (i.e., behavior that occurs in the absence of other people) can be difficult to treat. Traditional treatments typically have involved sophisticated methods of observation and often have employed positive punishment procedures. The current study evaluated the effectiveness of a variable momentary differential…

  19. Isolation and characterization of anti c-met single chain fragment variable (scFv) antibodies.

    PubMed

    Qamsari, Elmira Safaie; Sharifzadeh, Zahra; Bagheri, Salman; Riazi-Rad, Farhad; Younesi, Vahid; Abolhassani, Mohsen; Ghaderi, Sepideh Safaei; Baradaran, Behzad; Somi, Mohammad Hossein; Yousefi, Mehdi

    2017-12-01

    The receptor tyrosine kinase (RTK) Met is the cell surface receptor for hepatocyte growth factor (HGF) involved in invasive growth programs during embryogenesis and tumorigenesis. There is compelling evidence suggesting important roles for c-Met in colorectal cancer proliferation, migration, invasion, angiogenesis, and survival. Hence, a molecular inhibitor of an extracellular domain of the c-Met receptor that blocks c-Met-cell surface interactions could be of great therapeutic importance. In an attempt to develop molecular inhibitors of c-Met, single chain variable fragment (scFv) phage display libraries Tomlinson I + J against a specific synthetic oligopeptide from the extracellular domain of the c-Met receptor were screened; selected scFv were then characterized using various immune techniques. Three c-Met specific scFv (ES1, ES2, and ES3) were selected following five rounds of panning procedures. The scFv showed specific binding to the c-Met receptor, and significantly inhibited proliferation responses of a human colorectal carcinoma cell line (HCT-116). Moreover, apoptotic effects of the selected scFv antibodies on the HCT-116 cell line were also evaluated using Annexin V/PI assays. The results demonstrated that rates of apoptotic cell death of 46.0, 25.5, and 37.8% were induced in these cells by ES1, ES2, and ES3, respectively. The results demonstrated the ability to successfully isolate/characterize specific c-Met scFv that could ultimately have great therapeutic potential in immunotherapies against (colorectal) cancers.

  20. QSPR models for half-wave reduction potential of steroids: a comparative study between feature selection and feature extraction from subsets of or entire set of descriptors.

    PubMed

    Hemmateenejad, Bahram; Yazdani, Mahdieh

    2009-02-16

    Steroids are widely distributed in nature and are found in plants, animals, and fungi in abundance. A data set consisting of a diverse collection of steroids has been used to develop quantitative structure-electrochemistry relationship (QSER) models for their half-wave reduction potentials. Modeling was established by means of multiple linear regression (MLR) and principal component regression (PCR) analyses. In MLR analysis, the QSPR models were constructed either by first grouping descriptors and then stepwise selecting variables from each group (MLR1), or by stepwise selecting predictor variables from the pool of all calculated descriptors (MLR2). A similar procedure was used in PCR analysis, with the principal components (or features) extracted from the different groups of descriptors (PCR1) or from the entire set of descriptors (PCR2). The resulting models were evaluated using cross-validation, chance correlation, prediction of the reduction potentials of test samples, and assessment of the applicability domain. Both MLR approaches produced accurate results; however, the QSPR model found by MLR1 was statistically more significant. The PCR1 approach produced a model as accurate as the MLR approaches, whereas less accurate results were obtained with PCR2. Overall, the cross-validation and prediction correlation coefficients of the QSPR models resulting from the MLR1, MLR2 and PCR1 approaches were higher than 90%, which shows the high ability of the models to predict the reduction potentials of the studied steroids.
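
    A hedged sketch of the MLR1-style strategy: a generic forward-stepwise selector scored by adjusted R^2 is run within each descriptor group before pooling. The paper's descriptor groups, descriptors and stopping rule are not specified here, so everything below is illustrative.

        import numpy as np
        import statsmodels.api as sm

        def forward_select(X, y, max_terms=3):
            """Forward-stepwise selection scored by adjusted R^2."""
            chosen, remaining, best_adj = [], list(range(X.shape[1])), -np.inf
            while remaining and len(chosen) < max_terms:
                adj, j = max((sm.OLS(y, sm.add_constant(X[:, chosen + [k]])).fit()
                                .rsquared_adj, k) for k in remaining)
                if adj <= best_adj:      # stop when adjusted R^2 no longer improves
                    break
                best_adj, chosen = adj, chosen + [j]
                remaining.remove(j)
            return chosen

        rng = np.random.default_rng(2)
        n = 60
        groups = {"topological": rng.normal(size=(n, 8)),
                  "electronic": rng.normal(size=(n, 8))}
        y = groups["topological"][:, 0] + 0.5 * groups["electronic"][:, 2] \
            + rng.normal(0.0, 0.5, n)

        winners = {g: forward_select(X, y) for g, X in groups.items()}
        print(winners)  # a final MLR would then be fitted to the pooled winners (MLR1)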

  1. Understanding logistic regression analysis.

    PubMed

    Sperandei, Sandro

    2014-01-01

    Logistic regression is used to obtain odds ratios in the presence of more than one explanatory variable. The procedure is quite similar to multiple linear regression, with the exception that the response variable is binomial. The result is the impact of each variable on the odds ratio of the observed event of interest. The main advantage is the avoidance of confounding effects by analyzing the association of all variables together. In this article, we explain the logistic regression procedure using examples to make it as simple as possible. After definition of the technique, the basic interpretation of the results is highlighted and then some special issues are discussed.
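
    As a minimal illustration of the procedure described (simulated data and hypothetical variable names, not the article's example): the exponentiated coefficients of a fitted logistic model are the odds ratios, each adjusted for the other variables.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 1000
        age = rng.normal(50.0, 10.0, n)
        smoker = rng.integers(0, 2, n).astype(float)
        p = 1.0 / (1.0 + np.exp(-(-8.0 + 0.1 * age + 0.9 * smoker)))
        y = (rng.random(n) < p).astype(float)

        X = sm.add_constant(np.column_stack([age, smoker]))
        fit = sm.Logit(y, X).fit(disp=0)
        print(np.exp(fit.params[1:]))      # adjusted odds ratios for age and smoking
        print(np.exp(fit.conf_int()[1:]))  # 95% confidence intervals, odds-ratio scale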

  2. Evaluation of Currently Used Dental Management Indicators and Development of New Management and Performance Indicators.

    DTIC Science & Technology

    1981-07-01

    for selected procedures g. Periodontal Procedures Per Dentist Formula: Number of Perio Procedures Completed* Number of Dentists Assigned *See appendi...02336 - Resin, Complex A-2 Selected Endodontic Procedures for Endo teeth per assigned DDS ratio: 03311 - Anterior, 1 Canal Filled 03312 - Anterior, 2 or...04271 - Free Soft Tissue Graft 04272 - Vestibuloplasty 04340 - Perio Scale and Root Planing *Some of these procedures are not end-item entities as are

  3. 48 CFR 906.102 - Use of competitive procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... accordance with subpart 936.6 and 48 CFR subpart 36.6. (4) Program research and development announcements shall follow the competitive selection procedures for the award of research proposals in accordance with... follow the competitive selection procedures for award of these proposals in accordance with subpart 917...

  4. Incidence of bacterial contamination and predisposing factors during bone and tendon allograft procurement.

    PubMed

    Terzaghi, Clara; Longo, Alessia; Legnani, Claudio; Bernasconi, Davide Paolo; Faré, Maristella

    2015-03-01

    The aim of this study was to analyze factors contributing to bacteriological contamination of bone and tendon allografts. Between 2008 and 2011, 2,778 bone and tendon allografts obtained from 196 organ and tissue donors or tissue donors only were retrospectively analysed. Several variables were taken into account: donor type (organ and tissue donors vs. tissue donors only), cause of death, time interval between death and tissue procurement, duration of the procurement procedure, type of allografts, number of team members, number of trainee members, associated surgical procedures, positivity to haemoculture, and type of procurement. The overall incidence of graft contamination was 23 %. The cause of death, the procurement time, the duration of procurement and the associated surgical procedures were not associated with increased risk of contamination. A significant effect on contamination incidence was observed for the number of staff members performing the procurement. In addition, our study substantiated a significantly higher contamination rate for bone allografts than for tendon grafts. According to these observations, in order to minimize the contamination rate of procured musculoskeletal allografts, we recommend appropriate donor selection, use of standard sterile techniques, and immediate packaging of each allograft to reduce graft exposure. Allograft procurement should be performed by a small surgical team.

  5. An Automated Procedure for Evaluating Song Imitation

    PubMed Central

    Mandelblat-Cerf, Yael; Fee, Michale S.

    2014-01-01

    Songbirds have emerged as an excellent model system to understand the neural basis of vocal and motor learning. Like humans, songbirds learn to imitate the vocalizations of their parents or other conspecific “tutors.” Young songbirds learn by comparing their own vocalizations to the memory of their tutor song, slowly improving until over the course of several weeks they can achieve an excellent imitation of the tutor. Because of the slow progression of vocal learning, and the large amounts of singing generated, automated algorithms for quantifying vocal imitation have become increasingly important for studying the mechanisms underlying this process. However, methodologies for quantifying song imitation are complicated by the highly variable songs of either juvenile birds or those that learn poorly because of experimental manipulations. Here we present a method for the evaluation of song imitation that incorporates two innovations: First, an automated procedure for selecting pupil song segments, and, second, a new algorithm, implemented in Matlab, for computing both song acoustic and sequence similarity. We tested our procedure using zebra finch song and determined a set of acoustic features for which the algorithm optimally differentiates between similar and non-similar songs. PMID:24809510

  6. Do modern techniques improve core decompression outcomes for hip osteonecrosis?

    PubMed

    Marker, David R; Seyler, Thorsten M; Ulrich, Slif D; Srivastava, Siddharth; Mont, Michael A

    2008-05-01

    Core decompression procedures have been used in osteonecrosis of the femoral head to attempt to delay the joint destruction that may necessitate hip arthroplasty. The efficacy of core decompressions has been variable with many variations of technique described. To determine whether the efficacy of this procedure has improved during the last 15 years using modern techniques, we compared recently reported radiographic and clinical success rates to results of surgeries performed before 1992. Additionally, we evaluated the outcomes of our cohort of 52 patients (79 hips) who were treated with multiple small-diameter drillings. There was a decrease in the proportion of patients undergoing additional surgeries and an increase in radiographic success when comparing pre-1992 results to patients treated in the last 15 years. However, there were fewer Stage III hips in the more recent reports, suggesting that patient selection was an important reason for this improvement. The results of the small-diameter drilling cohort were similar to other recent reports. Patients who had small lesions and were Ficat Stage I had the best results with 79% showing no radiographic progression. Our study confirms core decompression is a safe and effective procedure for treating early stage femoral head osteonecrosis.

  7. A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials, able to sustain high temperatures and meet the weight requirements, are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
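
    A toy version of the simplified one-dimensional sizing step, under strong assumptions that are not from the paper (steady-state series conduction, fixed surface temperature and heat flux, illustrative material properties), can be written as a constrained weight minimization:

        import numpy as np
        from scipy.optimize import minimize

        q = 50e3                          # absorbed heat flux, W/m^2 (illustrative)
        T_hot, T_allow = 1200.0, 450.0    # surface and allowable back-face temp, K
        k = np.array([0.5, 40.0])         # conductivities: insulator, substrate (W/m/K)
        rho = np.array([300.0, 2700.0])   # densities, kg/m^3

        mass = lambda t: rho @ t                          # areal mass, kg/m^2
        back_temp = lambda t: T_hot - q * (t / k).sum()   # 1-D series conduction

        res = minimize(mass, x0=[0.02, 0.005], method="SLSQP",
                       bounds=[(1e-4, 0.1), (1e-3, 0.05)],
                       constraints=[{"type": "ineq",
                                     "fun": lambda t: T_allow - back_temp(t)}])
        print(res.x, back_temp(res.x))    # layer thicknesses at minimum weight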

  8. A critical review of variables affecting the accuracy and false-negative rate of sentinel node biopsy procedures in early breast cancer.

    PubMed

    Vijayakumar, Vani; Boerner, Philip S; Jani, Ashesh B; Vijayakumar, Srinivasan

    2005-05-01

    Radionuclide sentinel lymph node localization and biopsy is a staging procedure that is being increasingly used to evaluate patients with invasive breast cancer who have clinically normal axillary nodes. The most important prognostic indicator in patients with invasive breast cancer is the axillary node status, which must also be known for correct staging, and influences the selection of adjuvant therapies. The accuracy of sentinel lymph node localization depends on a number of factors, including the injection method, the operating surgeon's experience and the hospital setting. The efficacy of sentinel lymph node mapping can be determined by two measures: the sentinel lymph node identification rate and the false-negative rate. Of these, the false-negative rate is the most important, based on a review of 92 studies. As sentinel lymph node procedures vary widely, nuclear medicine physicians and radiologists must be acquainted with the advantages and disadvantages of the various techniques. In this review, the factors that influence the success of different techniques are examined, and studies which have investigated false-negative rates and/or sentinel lymph node identification rates are summarized.

  9. Study of the Decision-Making Procedures for the Acquisition of Science Library Materials and the Relation of These Procedures to the Requirements of College and University Library Patrons.

    ERIC Educational Resources Information Center

    Lane, David O.

    The idea that there was a need for formal study of the methods by which titles are selected for addition to the collections of academic science libraries resulted in this investigation of the selection processes of these libraries. Specifically, the study concentrates on the selection procedures in three sciences: biology, chemistry, and physics.…

  10. Selecting predictors for discriminant analysis of species performance: an example from an amphibious softwater plant.

    PubMed

    Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M

    2012-03-01

    Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
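
    A rough sketch of the favoured pipeline, with all details (binning, thresholds, data) hypothetical rather than taken from the paper: a univariate chi-square screen at P < 0.05 followed by a step-wise style sub-selection feeding a linear discriminant model.

        import numpy as np
        from scipy.stats import chi2_contingency
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 10))
        y = (X[:, 0] - X[:, 1] + rng.normal(0.0, 1.0, 200) > 0).astype(int)  # presence

        keep = []                                   # univariate screen at P < 0.05
        for j in range(X.shape[1]):
            bins = np.digitize(X[:, j], np.quantile(X[:, j], [0.25, 0.5, 0.75]))
            table = np.array([[np.sum((bins == b) & (y == c)) for b in range(4)]
                              for c in (0, 1)])
            if chi2_contingency(table)[1] < 0.05:
                keep.append(j)

        chosen, best = [], -np.inf                  # step-wise sub-selection
        while True:
            cands = [j for j in keep if j not in chosen]
            if not cands:
                break
            score, j = max((cross_val_score(LinearDiscriminantAnalysis(),
                                            X[:, chosen + [j]], y, cv=5).mean(), j)
                           for j in cands)
            if score <= best:
                break
            best, chosen = score, chosen + [j]
        print(chosen)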

  11. SU-E-J-275: Impact of the Intra and Inter Observer Variability in the Delineation of Parotid Glands On the Dose Calculation During Head and Neck Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jodda, A; Piotrowski, T

    2014-06-01

    Purpose: The intra- and inter-observer variability in delineation of the parotids on the kilo-voltage computed tomography (kVCT) and mega-voltage computed tomography (MVCT) was examined to establish its impact on the dose calculation during adaptive head and neck helical tomotherapy (HT). Methods: Three observers delineated left and right parotids for ten randomly selected patients with oropharynx cancer treated on HT. The pre-treatment kVCT and the MVCT from the first fraction of irradiation were selected for delineation. The delineation procedure was repeated three times by each observer. The parotids were delineated according to the institutional protocol. The analyses included intra-observer reproducibility and inter-structure, -observer and -modality variability of the volume and dose. Results: The differences between the left and right parotid outlines were not statistically significant (p>0.3). The reproducibility of the delineation was confirmed for each observer on the kVCT (p>0.2) and on the MVCT (p>0.1). The inter-observer variability of the outlines was significant (p<0.001) as well as the inter-modality variability (p<0.006). The parotids delineated on the MVCT were 10% smaller than on the kVCT. The inter-observer variability of the parotids delineation did not affect the average dose (p=0.096 on the kVCT and p=0.176 on the MVCT). The dose calculated on the MVCT was higher by 3.3% than the dose from the kVCT (p=0.009). Conclusion: Usage of the institutional protocols for the parotids delineation reduces intra-observer variability and increases reproducibility of the outlines. These protocols do not eliminate delineation differences between the observers, but these differences are not clinically significant and do not affect average doses in the parotids. The volumes of the parotids delineated on the MVCT are smaller than on the kVCT, which affects the differences in the calculated doses.

  12. Implications of the New EEOC Guidelines.

    ERIC Educational Resources Information Center

    Dhanens, Thomas P.

    1979-01-01

    In the next few years employers will frequently be confronted with the fact that they cannot rely on undocumented, subjective selection procedures. As long as disparate impact exists in employee selection, employers will be required to validate whatever selection procedures they use. (Author/IRT)

  13. Experimental selective posterior semicircular canal laser deafferentation.

    PubMed

    Naguib, Maged B

    2005-05-01

    In this experimental study, we attempted to perform selective deafferentation of the posterior semicircular canal ampulla of guinea pigs using a carbon dioxide laser beam. The results of this study document the efficacy of this procedure in achieving deafferentation of the posterior semicircular canal safely with regard to the other semicircular canals, the otolithic organ and the organ of hearing. Moreover, the procedure is performed with relative ease compared with other procedures previously described for selective deafferentation of the posterior semicircular canal. The clinical application of such a procedure for the treatment of intractable benign paroxysmal positional vertigo in humans is suggested.

  14. Variable Neighborhood Search Heuristics for Selecting a Subset of Variables in Principal Component Analysis

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Singh, Renu; Steinley, Douglas

    2009-01-01

    The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…

  15. Getting even or moving on? Power, procedural justice, and types of offense as predictors of revenge, forgiveness, reconciliation, and avoidance in organizations.

    PubMed

    Aquino, Karl; Tripp, Thomas M; Bies, Robert J

    2006-05-01

    A field study and an experimental study examined relationships among organizational variables and various responses of victims to perceived wrongdoing. Both studies showed that procedural justice climate moderates the effect of organizational variables on the victim's revenge, forgiveness, reconciliation, or avoidance behaviors. In Study 1, a field study, absolute hierarchical status enhanced forgiveness and reconciliation, but only when perceptions of procedural justice climate were high; relative hierarchical status increased revenge, but only when perceptions of procedural justice climate were low. In Study 2, a laboratory experiment, victims were less likely to endorse vengeance or avoidance depending on the type of wrongdoing, but only when perceptions of procedural justice climate were high.

  16. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.

  17. Just tell me what to do: bringing back experimenter control in active contingency tasks with the command-performance procedure and finding cue density effects along the way.

    PubMed

    Hannah, Samuel D; Beneteau, Jennifer L

    2009-03-01

    Active contingency tasks, such as those used to explore judgments of control, suffer from variability in the actual values of critical variables. The authors debut a new, easily implemented procedure that restores control over these variables to the experimenter simply by telling participants when to respond and when to withhold responding. This command-performance procedure not only restores control over critical variables such as actual contingency, it also allows response frequency to be manipulated independently of contingency or outcome frequency. This yields the first demonstration, to our knowledge, of the equivalent of a cue density effect in an active contingency task. Judgments of control are biased by response frequency, just as they are biased by outcome frequency. (c) 2009 APA, all rights reserved
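
    The actual contingency at issue is commonly summarized as deltaP = P(outcome | response) - P(outcome | no response). A small simulation (illustrative numbers only, not the authors' design) shows why free responding makes the realised value noisy while commanded responding fixes the cell counts:

        import numpy as np

        def delta_p(responded, outcome):
            responded, outcome = np.asarray(responded), np.asarray(outcome)
            return outcome[responded == 1].mean() - outcome[responded == 0].mean()

        rng = np.random.default_rng(5)
        n = 40
        free = (rng.random(n) < 0.8).astype(int)    # free operant: mostly responds
        out_free = np.where(free == 1, rng.random(n) < 0.7, rng.random(n) < 0.3)
        print(delta_p(free, out_free))              # noisy: few no-response trials

        cmd = np.repeat([1, 0], n // 2)             # command-performance: fixed counts
        out_cmd = np.where(cmd == 1, rng.random(n) < 0.7, rng.random(n) < 0.3)
        print(delta_p(cmd, out_cmd))                # programmed contingency is 0.4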

  18. Synthesis, characterisation and analytical application of Fe₃O₄@SiO₂@polyaminoquinoline magnetic nanocomposite for the extraction and pre-concentration of Cd(II) and Pb(II) in food samples.

    PubMed

    Manoochehri, Mahboobeh; Asgharinezhad, Ali Akbar; Shekari, Nafiseh

    2015-01-01

    This work describes a novel Fe₃O₄@SiO₂@polyaminoquinoline magnetic nanocomposite and its application in the pre-concentration of Cd(II) and Pb(II) ions. The parameters affecting the pre-concentration procedure were optimised by a Box-Behnken design through response surface methodology. Three variables (extraction time, magnetic sorbent amount and pH) were selected as the main factors affecting the sorption step, while four variables (type, volume and concentration of the eluent, and elution time) were selected as main factors in the optimisation study of the elution step. Following the sorption and elution of analytes, the ions were quantified by flame atomic absorption spectrometry (FAAS). The limits of detection were 0.1 and 0.7 ng ml(-1) for Cd(II) and Pb(II) ions, respectively. All the relative standard deviations were less than 7.6%. The sorption capacities of this new sorbent were 57 mg g(-1) for Cd(II) and 73 mg g(-1) for Pb(II). Ultimately, this nanocomposite was successfully applied to the rapid extraction of trace quantities of these heavy metal ions from seafood and agricultural samples and satisfactory results were obtained.
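
    For reference, a three-factor Box-Behnken design of the kind used here (all +/-1 factor pairs with the remaining factor at its centre level, plus centre replicates) can be generated directly; the level coding in the final comment is illustrative, not the paper's actual settings.

        from itertools import combinations
        import numpy as np

        def box_behnken(k=3, center_reps=3):
            runs = []
            for i, j in combinations(range(k), 2):   # each factor pair at +/-1 ...
                for a in (-1, 1):
                    for b in (-1, 1):
                        row = [0] * k                # ... remaining factor at centre
                        row[i], row[j] = a, b
                        runs.append(row)
            return np.array(runs + [[0] * k] * center_reps)

        design = box_behnken()
        print(design.shape)   # (15, 3): 12 edge midpoints + 3 centre replicates
        # Coded levels map to real settings, e.g. pH = 6 + 2 * design[:, 2]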

  19. Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis.

    PubMed

    Cho, Seoae; Kim, Haseong; Oh, Sohee; Kim, Kyunga; Park, Taesung

    2009-12-15

    The current trend in genome-wide association studies is to identify regions where the true disease-causing genes may lie by evaluating thousands of single-nucleotide polymorphisms (SNPs) across the whole genome. However, many challenges exist in detecting disease-causing genes among the thousands of SNPs. Examples include multicollinearity and multiple testing issues, especially when a large number of correlated SNPs are simultaneously tested. Multicollinearity can often occur when predictor variables in a multiple regression model are highly correlated, and can cause imprecise estimation of association. In this study, we propose a simple stepwise procedure that identifies disease-causing SNPs simultaneously by employing elastic-net regularization, a variable selection method that allows one to address multicollinearity. At Step 1, the single-marker association analysis was conducted to screen SNPs. At Step 2, the multiple-marker association was scanned based on the elastic-net regularization. The proposed approach was applied to the rheumatoid arthritis (RA) case-control data set of Genetic Analysis Workshop 16. While the SNPs selected at the screening step were located mostly on chromosome 6, the elastic-net approach identified putative RA-related SNPs on other chromosomes in an increased proportion. For some of those putative RA-related SNPs, we identified interactions with sex, a well-known factor affecting RA susceptibility.
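
    A schematic of the two-step procedure on simulated SNP dosages (the screening test, thresholds and tuning values are placeholders, not the authors' settings):

        import numpy as np
        from scipy import stats
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(6)
        n, p = 300, 500
        X = rng.integers(0, 3, size=(n, p)).astype(float)     # SNP dosages 0/1/2
        y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(0.0, 2.0, n) > 0).astype(int)

        # Step 1: single-marker screen (per-SNP test, a stand-in for a trend test)
        pvals = np.array([stats.pearsonr(X[:, j], y)[1] for j in range(p)])
        screened = np.flatnonzero(pvals < 0.01)

        # Step 2: joint elastic-net logistic model over the screened, correlated SNPs
        enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                  l1_ratio=0.5, C=0.5, max_iter=5000)
        enet.fit(X[:, screened], y)
        print(screened[np.flatnonzero(enet.coef_[0])])        # putative SNPs retained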

  20. Selection of process conditions by risk assessment for apple juice pasteurization by UV-heat treatments at moderate temperatures.

    PubMed

    Gayán, E; Torres, J A; Alvarez, I; Condón, S

    2014-02-01

    The effect of bactericidal UV-C treatments (254 nm) on Escherichia coli O157:H7 suspended in apple juice increased synergistically with temperature up to a threshold value. The optimum UV-C treatment temperature was 55 °C, yielding a 58.9% synergistic lethal effect. Under these treatment conditions, the UV-heat (UV-H55 °C) lethal variability achieving 5-log reductions had a logistic distribution (α = 37.92, β = 1.10). Using this distribution, UV-H55 °C doses to achieve the required juice safety goal with 95, 99, and 99.9% confidence were 41.17, 42.97, and 46.00 J/ml, respectively, i.e., doses higher than the 37.58 J/ml estimated by a deterministic procedure. The public health impact of these results is that the larger UV-H55 °C dose required for achieving 5-log reductions with 95, 99, and 99.9% confidence would reduce the probability of hemolytic uremic syndrome in children by 76.3, 88.6, and 96.9%, respectively. This study illustrates the importance of including the effect of data variability when selecting operational parameters for novel and conventional preservation processes to achieve high food safety standards with the desired confidence level.
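
    The confidence-level doses follow from quantiles of the fitted logistic distribution; assuming scipy, the reported figures are approximately reproduced (the 99.9% value differs slightly, presumably from rounding of the fitted parameters):

        from scipy import stats

        dose = stats.logistic(loc=37.92, scale=1.10)   # fitted alpha, beta
        for conf in (0.95, 0.99, 0.999):
            print(f"{conf:.3f} confidence -> {dose.ppf(conf):.2f} J/ml")
        # prints ~41.16, ~42.97 and ~45.52 J/ml; the first two match the
        # reported doses, the third is close to the reported 46.00 J/ml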

  1. Optimization techniques for integrating spatial data

    USGS Publications Warehouse

    Herzfeld, U.C.; Merriam, D.F.

    1995-01-01

    Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on construction of a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multidimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automatization, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structure, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
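
    A compact sketch of the first approach, assuming scipy (all grids simulated; the objective is a generic squared misfit, not the paper's map-comparison function): combine co-registered predictor grids with weights and tune the weights by Nelder-Mead from a user-supplied first guess.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        grids = rng.normal(size=(3, 400))     # three co-registered predictor maps
        target = 0.6 * grids[0] + 0.3 * grids[1] + rng.normal(0.0, 0.1, 400)

        def misfit(w):                        # map-comparison style objective
            return np.mean((target - w @ grids) ** 2)

        first_guess = np.array([0.3, 0.3, 0.3])   # the geologically informed guess
        res = minimize(misfit, first_guess, method="Nelder-Mead")
        print(res.x)                          # tuned weights; not unique in general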

  2. Volumetric calculations in an oil field: The basis method

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky, V.; Davis, J.C.

    1993-01-01

    The basis method for estimating oil reserves in place is compared to a traditional procedure that uses ordinary kriging. In the basis method, auxiliary variables that sum to the net thickness of pay are estimated by cokriging. In theory, the procedure should be more powerful because it makes full use of the cross-correlation between variables and forces the original variables to honor interval constraints. However, at least in our case study, the practical advantages of cokriging for estimating oil in place are marginal. © 1993.

  3. The Effect of Curriculum Sample Selection for Medical School

    ERIC Educational Resources Information Center

    de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland

    2017-01-01

    In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we selected applicants. We designed a procedure based on…

  4. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
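
    Freedman's demonstration is easy to reproduce: with the number of observations comparable to the number of pure-noise predictors, screening by p-value still "finds" predictors, and refitting only those yields a misleadingly impressive model (a minimal simulation, not the paper's estimator):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n, p = 100, 50
        X, y = rng.normal(size=(n, p)), rng.normal(size=n)   # no true relationship

        fit = sm.OLS(y, sm.add_constant(X)).fit()
        hits = np.flatnonzero(fit.pvalues[1:] < 0.05)        # screen by p-value
        print(len(hits), "spurious 'significant' predictors")

        if hits.size:                                        # refit the "selected" model
            refit = sm.OLS(y, sm.add_constant(X[:, hits])).fit()
            print(refit.f_pvalue)   # often impressively small: model selection bias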

  5. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, which is centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection from intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making, regarding natural and social disasters, and reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40- and 25-km of the regional climate model.

  6. 48 CFR 570.305 - Two-phase design-build selection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 (2010-10-01) Two-phase design-build... for Leasehold Interests in Real Property 570.305 Two-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if you use the two-phase design-build...

  7. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  8. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  9. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  10. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  11. 28 CFR 50.14 - Guidelines on employee selection procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...

  12. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. The approach is based on an unconstrained optimization problem and is applied to a low-boom supersonic aircraft. The jig-shape optimization is performed using a two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as the sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 inches for the starting configuration to 0.00367 inch by the end of the third optimization run.
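
    The first step, least-squares fitting of basis-shape coefficients, reduces to a linear solve; the sketch below uses random placeholder shapes rather than the aircraft's actual mode shapes, and the dimensions are illustrative only.

        import numpy as np

        rng = np.random.default_rng(9)
        n_nodes, n_basis = 200, 16        # e.g. 12 modes + rigid shapes + residual
        basis = rng.normal(size=(n_nodes, n_basis))       # placeholder basis shapes
        baseline = rng.normal(size=n_nodes)               # baseline jig-shape
        target = baseline + basis @ rng.normal(size=n_basis)  # desired trim shape

        coef, *_ = np.linalg.lstsq(basis, target - baseline, rcond=None)
        trim_error = target - (baseline + basis @ coef)
        print(np.abs(trim_error).max())   # coef become the starting design variables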

  13. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    PubMed

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.

  14. Prospective analysis of percutaneous endoscopic colostomy at a tertiary referral centre.

    PubMed

    Baraza, W; Brown, S; McAlindon, M; Hurlstone, P

    2007-11-01

    Percutaneous endoscopic colostomy (PEC) is an alternative to surgery in selected patients with recurrent sigmoid volvulus, recurrent pseudo-obstruction or severe slow-transit constipation. A percutaneous tube acts as an irrigation or decompressant channel, or as a mode of sigmoidopexy. This prospective study evaluated the safety and efficacy of this procedure at a single tertiary referral centre. Nineteen patients with recurrent sigmoid volvulus, ten with idiopathic slow-transit constipation and four with pseudo-obstruction underwent PEC. The tube was left in place indefinitely in those with recurrent sigmoid volvulus or constipation, whereas in patients with pseudo-obstruction it was left in place for a variable period of time, depending on symptoms. Thirty-five procedures were performed in 33 patients. Three patients developed peritonitis, of whom one died, and ten patients had minor complications. Symptoms resolved in 26 patients. This large prospective study has confirmed the value of PEC in the treatment of recurrent sigmoid volvulus and pseudo-obstruction in high-risk surgical patients. Copyright (c) 2007 British Journal of Surgery Society Ltd.

  15. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely, Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form via handling of the UV spectral data. A 3-factor 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
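
    A rough sketch contrasting plain PLS with PLS after a simple correlation-based variable-selection step (a crude stand-in for the genetic algorithm actually used; the spectra are simulated, not the drug data):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(10)
        n, n_wl = 25, 120                     # 25 mixtures, 120 spectral points
        X = rng.normal(size=(n, n_wl))
        y = X[:, 10] + 0.5 * X[:, 40] + rng.normal(0.0, 0.2, n)

        pls = PLSRegression(n_components=3)
        print(cross_val_score(pls, X, y, cv=5).mean())            # full spectrum

        corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_wl)])
        top = np.argsort(corr)[-20:]                              # crude selection
        print(cross_val_score(pls, X[:, top], y, cv=5).mean())    # selected variables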

  16. A direct screening procedure for gravitropism mutants in Arabidopsis thaliana (L. ) Heynh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bullen, B.L.; Best, T.R.; Gregg, M.M.

    1990-06-01

    In order to isolate gravitropism mutants of Arabidopsis thaliana (L.) Heynh. var Estland for the genetic dissection of the gravitropism pathway, a direct screening procedure has been developed in which mutants are selected on the basis of their gravitropic response. Variability in hypocotyl curvature was dependent on the germination time of each seed stock, resulting in the incorrect identification of several lines as gravitropism mutants when a standard protocol for the potentiation of germination was used. When the protocol was adjusted to allow for differences in germination time, these lines were eliminated from the collection. Out of the 60,000 M2 seedlings screened, 0.3 to 0.4% exhibited altered gravitropism. In approximately 40% of these mutant lines, only gravitropism by the root or the hypocotyl was altered, while the response of the other organ was unaffected. These data support the hypothesis that root and hypocotyl gravitropism are genetically separable.

  17. Peak-flow frequency estimates through 1994 for gaged streams in South Dakota

    USGS Publications Warehouse

    Burr, M.J.; Korkow, K.L.

    1996-01-01

    Annual peak-flow data are listed for 250 continuous-record and crest-stage gaging stations in South Dakota. Peak-flow frequency estimates for selected recurrence intervals ranging from 2 to 500 years are given for 234 of these 250 stations. The log-Pearson Type III procedure was used to compute the frequency relations for the 234 stations, which in 1994 included 105 active and 129 inactive stations. The log-Pearson Type III procedure is recommended by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data, 1982, "Guidelines for Determining Flood Flow Frequency." No peak-flow frequency estimates are given for 16 of the 250 stations because: (1) of extreme variability in the data set; (2) more than 20 percent of years had no flow; (3) annual peak flows represent large outflow from a spring; (4) of insufficient peak-flow record subsequent to reservoir regulation; and (5) peak-flow records were combined with records from nearby stations.
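
    The log-Pearson Type III computation itself is short, assuming scipy (the record is simulated, and scipy's maximum-likelihood fit stands in for the Bulletin 17B moments-with-regional-skew procedure): fit a Pearson Type III distribution to the base-10 logarithms of the annual peaks and read quantiles at the chosen recurrence intervals.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        peaks = rng.lognormal(mean=6.0, sigma=0.8, size=40)   # simulated annual peaks
        logq = np.log10(peaks)

        skew, loc, scale = stats.pearson3.fit(logq)           # fit on the log scale
        for T in (2, 10, 50, 100, 500):
            q = 10 ** stats.pearson3.ppf(1.0 - 1.0 / T, skew, loc, scale)
            print(f"{T:>3}-year peak flow: {q:,.0f}")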

  18. A survey of whitewater recreation impacts along five West Virginia rivers

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1998-01-01

    Results are reported from an assessment of whitewater river recreation impacts at river accesses and recreation sites along five West Virginia rivers: the New, Gauley, Cheat, Tygart, and Shenandoah. Procedures were developed and applied to assess resource conditions on 24 river access roads, 68 river accesses, and 151 recreation sites. The majority of river accesses and recreation sites are located on the New and Gauley rivers, which account for most of the state's whitewater recreation use. Site conditions are variable. While some river accesses and sites are situated on resistant rocky substrates, many are poorly designed and/or located on erodible soil and sand substrates. Recreation site sizes and other areal measures of site disturbance are quite large, coincident with the large group sizes associated with commercially outfitted whitewater rafting trips. Recommendations are offered for managing river accesses and sites, for managing whitewater visitation, and for selecting indicators and standards as part of a Limits of Acceptable Change management process. Procedures and recommendations for continued visitor impact monitoring are also offered.

  19. Users guide for the Water Resources Division bibliographic retrieval and report generation system

    USGS Publications Warehouse

    Tamberg, Nora

    1983-01-01

    The WRDBIB Retrieval and Report-generation system has been developed by applying Multitrieve (CSD 1980, Reston) software to bibliographic data files. The WRDBIB data base includes some 9,000 records containing bibliographic citations and descriptors of WRD reports released for publication during 1968-1982. The data base is resident in the Reston Multics computer and may be accessed by registered Multics users in the field. The WRDBIB Users Guide provides detailed procedures on how to run retrieval programs using WRDBIB library files, and how to prepare custom bibliographic reports and author indexes. Users may search the WRDBIB data base on the following variable fields as described in the Data Dictionary: Authors, organizational source, title, citation, publication year, descriptors, and the WRSIC (accession) number. The Users Guide provides ample examples of program runs illustrating various retrieval and report generation aspects. Appendices include Multics access and file manipulation procedures; a 'Glossary of Selected Terms'; and a complete 'Retrieval Session' with step-by-step outlines. (USGS)

  20. 41 CFR 60-3.2 - Scope.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.2 Scope. A. Application of... tests and other selection procedures which are used as a basis for any employment decision. Employment... certification may be covered by Federal equal employment opportunity law. Other selection decisions, such as...
