Science.gov

Sample records for aic model selection

  1. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    SciTech Connect

    Glosup, J.G.; Axelrod, M.C.

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
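
    As a hedged illustration of the selection logic above (not of Middleton's Class A computation itself), the following sketch fits one- and two-component Gaussian models by the EM algorithm with scikit-learn and compares their AIC values; the data and every name in it are illustrative.

        # Minimal sketch: EM-fitted Gaussian vs. Gaussian mixture, ranked by AIC.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic data actually drawn from a two-component mixture
        x = np.concatenate([rng.normal(0, 1, 700),
                            rng.normal(4, 0.5, 300)]).reshape(-1, 1)

        for k in (1, 2):
            gm = GaussianMixture(n_components=k, random_state=0).fit(x)  # EM fit
            print(f"components={k}  AIC={gm.aic(x):.1f}")  # lower AIC preferred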

  2. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    SciTech Connect

    Glosup, J.G.; Axelrod, M.C.

    1994-08-12

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method. The problem involves a probability model for underwater noise due to distant shipping.

  3. The T cell-selective IL-2 mutant AIC284 mediates protection in a rat model of Multiple Sclerosis.

    PubMed

    Weishaupt, Andreas; Paulsen, Daniela; Werner, Sandra; Wolf, Nelli; Köllner, Gabriele; Rübsamen-Schaeff, Helga; Hünig, Thomas; Kerkau, Thomas; Beyersdorf, Niklas

    2015-05-15

    Targeting regulatory T cells (Treg cells) with interleukin-2 (IL-2) constitutes a novel therapeutic approach for autoimmunity. As anti-cancer therapy with IL-2 has revealed substantial toxicities a mutated human IL-2 molecule, termed AIC284 (formerly BAY 50-4798), has been developed to reduce these side effects. To assess whether AIC284 is efficacious in autoimmunity, we studied its therapeutic potential in an animal model for Multiple Sclerosis. Treatment of Lewis rats with AIC284 increased Treg cell numbers and protected the rats from Experimental Autoimmune Encephalomyelitis (EAE). AIC284 might, thus, also efficiently prevent progression of autoimmune diseases in humans. PMID:25903730

  4. Model selection and psychological theory: A discussion of the differences between the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)

    PubMed Central

    Vrieze, Scott I.

    2012-01-01

    This article reviews the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. We discuss theoretical statistical results in regression and illustrate more important issues with novel simulations involving latent variable models including factor analysis, latent profile analysis, and factor mixture models. Asymptotically, the BIC is consistent, in that it will select the true model if, among other assumptions, the true model is among the candidate models considered. The AIC is not consistent under these circumstances. When the true model is not in the candidate model set the AIC is efficient, in that it will asymptotically choose whichever model minimizes the mean squared error of prediction/estimation. The BIC is not efficient under these circumstances. Unlike the BIC, the AIC also has a minimax property, in that it can minimize the maximum possible risk in finite sample sizes. In sum, the AIC and BIC have quite different properties that require different assumptions, and applied researchers and methodologists alike will benefit from improved understanding of the asymptotic and finite-sample behavior of these criteria. The ultimate decision to use AIC or BIC depends on many factors, including: the loss function employed, the study's methodological design, the substantive research question, and the notion of a true model and its applicability to the study at hand. PMID:22309957
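
    The consistency-versus-efficiency contrast can be made concrete with a small sketch, assuming Gaussian errors and the standard least-squares form AIC = n log(RSS/n) + 2k: with the true model (degree 2) among the candidates, BIC's heavier log(n) penalty favors it more reliably, while AIC more readily admits extra terms.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        x = rng.uniform(-2, 2, n)
        y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 1, n)  # true degree = 2

        for d in range(1, 6):
            X = np.vander(x, d + 1)                 # polynomial design matrix
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            k = d + 2                               # coefficients + error variance
            aic = n * np.log(rss / n) + 2 * k
            bic = n * np.log(rss / n) + k * np.log(n)
            print(f"degree={d}  AIC={aic:.1f}  BIC={bic:.1f}")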

  5. Model Selection and Psychological Theory: A Discussion of the Differences between the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)

    ERIC Educational Resources Information Center

    Vrieze, Scott I.

    2012-01-01

    This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…

  6. AIC, BIC, Bayesian evidence against the interacting dark energy model

    NASA Astrophysics Data System (ADS)

    Szydłowski, Marek; Krawiec, Adam; Kurek, Aleksandra; Kamionka, Michał

    2015-01-01

    Recent astronomical observations have indicated that the Universe is in a phase of accelerated expansion. While there are many cosmological models which try to explain this phenomenon, we focus on the interacting ΛCDM model, where an interaction between the dark energy and dark matter sectors takes place. This model is compared to its simpler alternative, the ΛCDM model. To choose between these models, the likelihood ratio test was applied as well as model comparison methods (employing Occam's principle): the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the Bayesian evidence. Using current astronomical data: type Ia supernovae (Union2.1), baryon acoustic oscillations, the Alcock-Paczynski test, and the cosmic microwave background data, we evaluated both models. The analyses based on the AIC indicated that there is less support for the interacting ΛCDM model when compared to the ΛCDM model, while those based on the BIC indicated that there is strong evidence against it in favor of the ΛCDM model. Given the weak or almost non-existent support for the interacting ΛCDM model, and bearing in mind Occam's razor, we are inclined to reject this model.
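
    For readers who want the arithmetic behind such comparisons: under Gaussian errors, AIC = χ²_min + 2k and BIC = χ²_min + k ln N up to a model-independent constant. The sketch below uses placeholder numbers, not the paper's fits.

        import math

        N = 580                     # hypothetical number of data points
        # (chi2_min, number of free parameters k) per model - placeholders
        models = {"LCDM": (562.2, 2), "interacting LCDM": (561.9, 3)}

        for name, (chi2, k) in models.items():
            print(f"{name:18s} AIC={chi2 + 2 * k:.1f}  "
                  f"BIC={chi2 + k * math.log(N):.1f}")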

  7. AIC649 Induces a Bi-Phasic Treatment Response in the Woodchuck Model of Chronic Hepatitis B

    PubMed Central

    Paulsen, Daniela; Weber, Olaf; Ruebsamen-Schaeff, Helga; Tennant, Bud C.; Menne, Stephan

    2015-01-01

    AIC649 has been shown to directly address the antigen presenting cell arm of the host immune defense, leading to regulated cytokine release and activation of T cell responses. In the present study we analyzed the antiviral efficacy of AIC649 and its potential to induce functional cure in animal models of chronic hepatitis B: hepatitis B virus transgenic mice and woodchucks chronically infected with woodchuck hepatitis virus (WHV). In the mouse system, AIC649 decreased the hepatitis B virus titer as effectively as the “gold standard”, Tenofovir. Interestingly, AIC649-treated, chronically WHV-infected woodchucks displayed a bi-phasic pattern of response: the marker for functional cure—hepatitis surface antigen—first increased but subsequently decreased, even after cessation of treatment, to significantly reduced levels. We hypothesize that the observed bi-phasic response pattern to AIC649 treatment reflects a physiologically “concerted”, reconstituted immune response against WHV and may therefore indicate a potential for inducing functional cure in HBV-infected patients. PMID:26656974

  8. AIC649 Induces a Bi-Phasic Treatment Response in the Woodchuck Model of Chronic Hepatitis B.

    PubMed

    Paulsen, Daniela; Weber, Olaf; Ruebsamen-Schaeff, Helga; Tennant, Bud C; Menne, Stephan

    2015-01-01

    AIC649 has been shown to directly address the antigen presenting cell arm of the host immune defense, leading to regulated cytokine release and activation of T cell responses. In the present study we analyzed the antiviral efficacy of AIC649 and its potential to induce functional cure in animal models of chronic hepatitis B: hepatitis B virus transgenic mice and woodchucks chronically infected with woodchuck hepatitis virus (WHV). In the mouse system, AIC649 decreased the hepatitis B virus titer as effectively as the "gold standard", Tenofovir. Interestingly, AIC649-treated, chronically WHV-infected woodchucks displayed a bi-phasic pattern of response: the marker for functional cure--hepatitis surface antigen--first increased but subsequently decreased, even after cessation of treatment, to significantly reduced levels. We hypothesize that the observed bi-phasic response pattern to AIC649 treatment reflects a physiologically "concerted", reconstituted immune response against WHV and may therefore indicate a potential for inducing functional cure in HBV-infected patients. PMID:26656974

  9. The role of multicollinearity in landslide susceptibility assessment by means of Binary Logistic Regression: comparison between VIF and AIC stepwise selection

    NASA Astrophysics Data System (ADS)

    Cama, Mariaelena; Cristi Nicu, Ionut; Conoscenti, Christian; Quénéhervé, Geraldine; Maerker, Michael

    2016-04-01

    Landslide susceptibility can be defined as the likelihood of a landslide occurring in a given area on the basis of local terrain conditions. In recent decades, much research has focused on its evaluation by means of stochastic approaches, under the assumption that 'the past is the key to the future': if a model is able to reproduce a known landslide spatial distribution, it should be able to predict the future locations of new (i.e. unknown) slope failures. Among the various stochastic approaches, Binary Logistic Regression (BLR) is one of the most widely used because it expresses susceptibility in probabilistic terms and its results are easily interpretable from a geomorphological point of view. However, multicollinearity assessment is often neglected; its effect is that the coefficient estimates become unstable, may take opposite signs, and are therefore difficult to interpret. It should thus be evaluated in every analysis in order to obtain a model whose results are geomorphologically sound. In this study, the effects of multicollinearity on the predictive performance and robustness of landslide susceptibility models are analyzed. In particular, multicollinearity is estimated by means of the Variance Inflation Factor (VIF), which is also used as a selection criterion for the independent variables (VIF stepwise selection) and compared to the more commonly used AIC stepwise selection. The robustness of the results is evaluated through 100 replicates of the dataset. The study area selected for this analysis is the Moldavian Plateau, where landslides are among the most frequent geomorphological processes. This area has an increasing trend of urbanization and very high cultural heritage potential, being the place of discovery of the largest settlement of the Cucuteni Culture in Eastern Europe (which led to the development of the great Cucuteni-Trypillia complex). Therefore, identifying the areas susceptible to…

  10. Model Selection for Geostatistical Models

    SciTech Connect

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables, and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results showing that using AIC for a geostatistical model is superior to the often-used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also apply the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
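
    A simplified sketch of the paper's central point, under stated assumptions (known unit error variance and a fixed exponential covariance range rather than an estimated one): the same regression is scored by a Gaussian log-likelihood AIC once with iid errors and once with spatially correlated errors.

        import numpy as np
        from scipy.spatial.distance import cdist

        rng = np.random.default_rng(2)
        n = 150
        coords = rng.uniform(0, 10, (n, 2))
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        # Simulate spatially correlated errors (exponential covariance)
        Sigma = np.exp(-cdist(coords, coords) / 2.0)
        y = X @ np.array([1.0, 0.5]) + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

        def gaussian_aic(cov, n_cov_params):
            """Profile out beta by GLS, return AIC under covariance `cov`."""
            ci = np.linalg.inv(cov)
            beta = np.linalg.solve(X.T @ ci @ X, X.T @ ci @ y)
            r = y - X @ beta
            _, logdet = np.linalg.slogdet(cov)
            ll = -0.5 * (n * np.log(2 * np.pi) + logdet + r @ ci @ r)
            return -2 * ll + 2 * (X.shape[1] + n_cov_params)

        print("iid errors     AIC:", round(gaussian_aic(np.eye(n), 1), 1))
        print("spatial errors AIC:", round(gaussian_aic(Sigma, 2), 1))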

  11. Perceived challenges and attitudes to regimen and product selection from Italian haemophilia treaters: the 2013 AICE survey.

    PubMed

    Franchini, M; Coppola, A; Rocino, A; Zanon, E; Morfini, M; Accorsi, Arianna; Aru, Anna Brigida; Biasoli, Chiara; Cantori, Isabella; Castaman, Giancarlo; Cesaro, Simone; Ciabatta, Carlo; De Cristofaro, Raimondo; Delios, Grazia; Di Minno, Giovanni; D'Incà, Marco; Dragani, Alfredo; Ettorre, Cosimo Pietro; Gagliano, Fabio; Gamba, Gabriella; Gandini, Giorgio; Giordano, Paola; Giuffrida, Gaetano; Gresele, Paolo; Latella, Caterina; Luciani, Matteo; Margaglione, Maurizio; Marietta, Marco; Mazzucconi, Maria Gabriella; Messina, Maria; Molinari, Angelo Claudio; Notarangelo, Lucia Dora; Oliovecchio, Emily; Peyvandi, Flora; Piseddu, Gavino; Rossetti, Gina; Rossi, Vincenza; Santagostino, Elena; Schiavoni, Mario; Schinco, Piercarla; Serino, Maria Luisa; Tagliaferri, Annarita; Testa, Sophie

    2014-03-01

    Despite great advances in haemophilia care in the last 20 years, a number of questions on haemophilia therapy remain unanswered. These debated issues primarily involve the choice of product type (plasma-derived vs. recombinant) for patients with different characteristics: specifically, whether they have been infected by blood-borne viruses, and whether they carry a high or low risk of inhibitor development. In addition, choosing the most appropriate treatment regimen for non-inhibitor and inhibitor patients compels physicians operating at the haemophilia treatment centres (HTCs) to take important therapeutic decisions, which are often based on their personal clinical experience rather than on evidence-based recommendations from the published literature. To ascertain the opinions of Italian expert physicians, who are responsible for common clinical practice and therapeutic decisions, on the most controversial aspects of haemophilia care, we conducted a survey among the Directors of HTCs affiliated to the Italian Association of Haemophilia Centres (AICE). A questionnaire consisting of 19 questions covering the most important topics related to haemophilia treatment was sent to the Directors of all 52 Italian HTCs. Forty Directors out of 52 (76.9%) responded, accounting for the large majority of HTCs affiliated to the AICE throughout Italy. The results of this survey provide for the first time a picture of the attitudes towards clotting factor concentrate use and product selection among clinicians working at Italian HTCs. PMID:24533954

  12. The characteristic of correspondence analysis estimator to estimate latent variable model method using high-dimensional AIC

    NASA Astrophysics Data System (ADS)

    Bambang Avip Priatna, M.; Lukman, Sumiaty, Encum

    2016-02-01

    This paper aims to determine the properties of the Correspondence Analysis (CA) estimator for estimating latent variable models. The method used is the high-dimensional AIC (HAIC) method with simulated Bernoulli-distributed data. The stages are: (1) determine the CA matrix; (2) build a model of the CA estimator to estimate the latent variables using HAIC; (3) simulate the Bernoulli-distributed data with 1,000,748 repetitions. The simulation results show that the CA estimator models work well.

  13. Model Selection Information Criteria for Non-Nested Latent Class Models.

    ERIC Educational Resources Information Center

    Lin, Ting Hsiang; Dayton, C. Mitchell

    1997-01-01

    The use of three model selection information criteria for latent class models was studied for nonnested models: (1) Akaike's information criterion (AIC; H. Akaike, 1973); (2) the Schwarz information criterion (SIC; G. Schwarz, 1978); and (3) the Bozdogan version of the AIC (CAIC; H. Bozdogan, 1987). Situations in which each is preferable…

  14. Calu-3 model under AIC and LCC conditions and application for protein permeability studies.

    PubMed

    Marušić, Maja; Djurdjevič, Ida; Drašlar, Kazimir; Caserman, Simon

    2014-01-01

    The broad area of respiratory epithelium, with its mild surface conditions, is an attractive possibility when trans-mucosal delivery of protein drugs is considered. The mucus and cellular barrier of respiratory epithelium can be modelled in vitro with the Calu-3 cell line. We monitored the morphology and barrier properties of Calu-3 cultures on permeable supports as they developed into liquid-covered or air-interfaced, mucus-lined cellular barriers. Besides morphological differences, the cultures differed in electrical resistance and in permeability to proteins. We also examined the enhanced protein permeability produced in these models by the permeability modulator MP C16. The effect on the electrical resistance of the cellular layer was rapid in both cultures, suggesting easy access of MP C16 to the cells, even though its overall impact on cell permeability was strongly reduced in the mucus-covered culture. The differences in the properties of the two models enable a better understanding of protein transmucosal permeability, suggesting the route of transport and the mode of MP C16 modulator action. PMID:24664333

  15. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  16. Comparison of six statistical approaches in the selection of appropriate fish growth models

    NASA Astrophysics Data System (ADS)

    Zhu, Lixin; Li, Lifang; Liang, Zhenlin

    2009-09-01

    The performance of six statistical approaches, which can be used to select the best model to describe the growth of individual fish, was analyzed using simulated and real length-at-age data. The six approaches include the coefficient of determination (R²), the adjusted coefficient of determination (adj.-R²), the root mean squared error (RMSE), Akaike's information criterion (AIC), the bias-corrected AIC (AICc) and the Bayesian information criterion (BIC). The simulated data were generated by five growth models with different numbers of parameters. Four sets of real data were taken from the literature. The parameters in each of the five growth models were estimated using the maximum likelihood method under the assumption of an additive error structure for the data. The model best supported by the data was identified using each of the six approaches. The results show that R² and RMSE have the same properties and perform worst. Sample size has an effect on the performance of adj.-R², AIC, AICc and BIC. Adj.-R² does better in small samples than in large samples. AIC is not suitable for use in small samples and tends to select more complex models as the sample size becomes large. AICc and BIC perform best in the small- and large-sample cases, respectively. Use of AICc or BIC is recommended for selection of a fish growth model according to the size of the length-at-age data.
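
    A hedged sketch of the comparison, with synthetic length-at-age data rather than the paper's: two growth curves are fit by least squares (additive Gaussian errors) and all six criteria are computed from the residual sum of squares.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        age = np.tile(np.arange(1, 11), 5).astype(float)
        length = 100 * (1 - np.exp(-0.3 * (age + 0.5))) + rng.normal(0, 3, age.size)

        def vbgf(t, Linf, K, t0):        # von Bertalanffy growth function
            return Linf * (1 - np.exp(-K * (t - t0)))

        def logistic(t, Linf, K, t0):    # logistic growth function
            return Linf / (1 + np.exp(-K * (t - t0)))

        def criteria(model, p0):
            popt, _ = curve_fit(model, age, length, p0=p0, maxfev=10000)
            n, k = age.size, len(popt) + 1          # +1 for the error variance
            rss = np.sum((length - model(age, *popt)) ** 2)
            r2 = 1 - rss / np.sum((length - length.mean()) ** 2)
            adj_r2 = 1 - (1 - r2) * (n - 1) / (n - len(popt) - 1)
            rmse = np.sqrt(rss / n)
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = n * np.log(rss / n) + k * np.log(n)
            return r2, adj_r2, rmse, aic, aicc, bic

        for name, model, p0 in [("VBGF", vbgf, (90, 0.2, 0)),
                                ("logistic", logistic, (90, 0.5, 3))]:
            print(name, ["%.2f" % v for v in criteria(model, p0)])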

  17. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focussed on selecting the frailty structure rather than the fixed effects. PMID:17476647

  18. Information-theoretic model selection and model averaging for closed-population capture-recapture studies

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1998-01-01

    Specification of an appropriate model is critical to valid statistical inference. Given that the "true model" for the data is unknown, the goal of model selection is to select a plausible approximating model that balances model bias and sampling variance. Model selection based on information criteria such as AIC or its variant AICc, or criteria like CAIC, has proven useful in a variety of contexts including the analysis of open-population capture-recapture data. These criteria have not been intensively evaluated for closed-population capture-recapture models, which are integer parameter models used to estimate population size (N), and there is concern that they will not perform well. To address this concern, we evaluated AIC, AICc, and CAIC model selection for closed-population capture-recapture models by empirically assessing the quality of inference for the population size parameter N. We found that AIC-, AICc-, and CAIC-selected models had smaller relative mean squared errors than randomly selected models, but that confidence interval coverage on N was poor unless unconditional variance estimates (which incorporate model uncertainty) were used to compute confidence intervals. Overall, AIC and AICc outperformed CAIC, and are preferred to CAIC for selection among the closed-population capture-recapture models we investigated. A model averaging approach to estimation, using AIC, AICc, or CAIC to estimate weights, was also investigated and proved superior to estimation using AIC-, AICc-, or CAIC-selected models. Our results suggested that, for model averaging, AIC or AICc should be favored over CAIC for estimating weights.
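
    The model-averaging step described above reduces to a short computation once each model's AICc and estimate of N are in hand; the numbers below are illustrative placeholders, not capture-recapture fits.

        import numpy as np

        aicc  = np.array([210.3, 211.1, 214.8])   # hypothetical AICc per model
        n_hat = np.array([520.0, 540.0, 495.0])   # hypothetical N-hat per model

        delta = aicc - aicc.min()
        w = np.exp(-0.5 * delta)
        w /= w.sum()                              # Akaike weights
        print("weights:", np.round(w, 3))
        print("model-averaged N:", round(float(w @ n_hat), 1))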

  19. Dynamic microphones M-87/AIC and M-101/AIC and earphone H-143/AIC [for space shuttle]

    NASA Technical Reports Server (NTRS)

    Reiff, F. H.

    1975-01-01

    The electrical characteristics of the M-87/AIC and M-101/AIC dynamic microphones and the H-143/AIC earphones were tested to establish the relative performance levels of units supplied by four vendors. The microphones and earphones were tested for frequency response, sensitivity, linearity, impedance and noise cancellation. Test results are presented and discussed.

  20. Information criteria and selection of vibration models.

    PubMed

    Ruzek, Michal; Guyader, Jean-Louis; Pézerat, Charles

    2014-12-01

    This paper presents a method of determining an appropriate equation of motion for two-dimensional plane structures such as membranes and plates from vibration response measurements. The local steady-state vibration field is used as input for the inverse problem that approximately determines the dispersion curve of the structure. This dispersion curve is then treated statistically with the Akaike information criterion (AIC), which compares the experimentally measured curve to several candidate models (equations of motion). The model with the lowest AIC value is then chosen, and the utility of the other models can also be assessed. This method is applied to three experimental case studies: a red cedar wood plate for musical instruments, a thick paper subjected to unknown membrane tension, and a thick composite sandwich panel. These three cases give three different model selection situations. PMID:25480053
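
    A toy version of the selection step (not the authors' inverse method): a measured dispersion curve is fit with a membrane-like model (wavenumber proportional to f) and a thin-plate-like model (proportional to the square root of f), and the candidates are ranked by a least-squares AIC.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(4)
        f = np.linspace(100, 2000, 40)
        k_meas = 0.9 * np.sqrt(f) + rng.normal(0, 0.5, f.size)  # plate-like data

        candidates = {
            "membrane": lambda f, a: a * f,
            "plate":    lambda f, a: a * np.sqrt(f),
        }
        n = f.size
        for name, model in candidates.items():
            popt, _ = curve_fit(model, f, k_meas, p0=[1e-2])
            rss = np.sum((k_meas - model(f, *popt)) ** 2)
            k_par = len(popt) + 1                   # slope + error variance
            print(f"{name:8s} AIC = {n * np.log(rss / n) + 2 * k_par:.1f}")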

  1. Parameter recovery and model selection in mixed Rasch models.

    PubMed

    Preinerstorfer, David; Formann, Anton K

    2012-05-01

    This study examines the precision of conditional maximum likelihood estimates and the quality of model selection methods based on information criteria (AIC and BIC) in mixed Rasch models. The design of the Monte Carlo simulation study included four test lengths (10, 15, 25, 40), three sample sizes (500, 1000, 2500), two simulated mixture conditions (one and two groups), and population homogeneity (equally sized subgroups) or heterogeneity (one subgroup three times larger than the other). The results show that both increasing sample size and increasing number of items lead to higher accuracy; medium-range parameters were estimated more precisely than extreme ones; and the accuracy was higher in homogeneous populations. The minimum-BIC method leads to almost perfect results and is more reliable than AIC-based model selection. The results are compared to findings by Li, Cohen, Kim, and Cho (2009) and practical guidelines are provided. PMID:21675964

  2. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    USGS Publications Warehouse

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have been available since the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case of the Akaike information criterion (AIC), which is remarkably superior in model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
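
    The model-averaged estimate and standard error mentioned above take a simple weighted form; the sketch below uses illustrative AIC values and coefficients, and the unconditional standard error adds a between-model variance term to each conditional one.

        import numpy as np

        aic  = np.array([100.0, 101.5, 104.0])   # hypothetical AIC per model
        beta = np.array([0.42, 0.55, 0.30])      # coefficient estimate per model
        se   = np.array([0.10, 0.12, 0.15])      # conditional SE per model

        w = np.exp(-0.5 * (aic - aic.min()))
        w /= w.sum()                             # Akaike weights
        beta_bar = w @ beta
        # Unconditional SE: conditional variance plus model-uncertainty term
        se_unc = w @ np.sqrt(se**2 + (beta - beta_bar)**2)
        print(f"beta_avg={beta_bar:.3f}  unconditional SE={se_unc:.3f}")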

  3. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
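
    Freedman's demonstration is easy to reproduce: regress pure noise on many pure-noise predictors and count how many look "significant". The sketch below uses n = 50 observations and p = 40 unrelated variables, both chosen only for illustration.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        n, p = 50, 40
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                      # y is unrelated to X

        Xd = np.column_stack([np.ones(n), X])       # add intercept
        beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        resid = y - Xd @ beta
        dof = n - p - 1
        sigma2 = resid @ resid / dof
        se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
        t = beta / se
        pvals = 2 * stats.t.sf(np.abs(t), dof)[1:]  # skip the intercept
        print(f"'significant' noise predictors at 0.05: "
              f"{(pvals < 0.05).sum()} of {p}")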

  4. The Development of the Extended Adolescent Injury Checklist (E-AIC): A Measure for Injury Prevention Program Evaluation

    ERIC Educational Resources Information Center

    Chapman, Rebekah; Buckley, Lisa; Sheehan, Mary

    2011-01-01

    The Extended Adolescent Injury Checklist (E-AIC), a self-report measure of injury based on the model of the Adolescent Injury Checklist (AIC), was developed for use in the evaluation of school-based interventions. The three stages of this development involved focus groups with adolescents and consultations with medical staff, pilot testing of the…

  5. Akaike information criterion to select well-fit resist models

    NASA Astrophysics Data System (ADS)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
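
    The reported correspondence between AIC and K-fold cross validation can be checked on a toy problem, with polynomial fits standing in for compact resist modelforms; under these assumptions both rankings typically agree on the best degree.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(6)
        n = 120
        x = rng.uniform(-1, 1, (n, 1))
        y = 1 - 2 * x[:, 0] + 3 * x[:, 0] ** 3 + rng.normal(0, 0.3, n)

        for d in range(1, 7):
            Xp = PolynomialFeatures(d).fit_transform(x)
            fit = LinearRegression(fit_intercept=False).fit(Xp, y)
            rss = np.sum((y - fit.predict(Xp)) ** 2)
            k = Xp.shape[1] + 1                     # coefficients + variance
            aic = n * np.log(rss / n) + 2 * k
            cv = -cross_val_score(LinearRegression(fit_intercept=False), Xp, y,
                                  cv=5, scoring="neg_mean_squared_error").mean()
            print(f"degree={d}  AIC={aic:7.1f}  5-fold MSE={cv:.4f}")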

  6. Autonomic Intelligent Cyber Sensor (AICS) Version 1.0.1

    SciTech Connect

    2015-03-01

    The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet based control network implementations. The AICS utilizes collaborative mechanisms based on Autonomic Research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards based, flexible two-level communication structure.

  7. Autonomic Intelligent Cyber Sensor (AICS) Version 1.0.1

    2015-03-01

    The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet based control network implementations. The AICS utilizes collaborative mechanisms based on Autonomic Research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards based, flexible two-level communication structure.

  8. Model weights and the foundations of multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2006-01-01

    Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximations to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.

  9. Towards a Model Selection Rule for Quantum State Tomography

    NASA Astrophysics Data System (ADS)

    Scholten, Travis; Blume-Kohout, Robin

    Quantum tomography on large and/or complex systems will rely heavily on model selection techniques, which permit on-the-fly selection of small efficient statistical models (e.g. small Hilbert spaces) that accurately fit the data. Many model selection tools, such as hypothesis testing or Akaike's AIC, rely implicitly or explicitly on the Wilks Theorem, which predicts the behavior of the loglikelihood ratio statistic (LLRS) used to choose between models. We used Monte Carlo simulations to study the behavior of the LLRS in quantum state tomography, and found that it disagrees dramatically with Wilks' prediction. We propose a simple explanation for this behavior; namely, that boundaries (in state space and between models) play a significant role in determining the distribution of the LLRS. The resulting distribution is very complex, depending strongly on both the true state and the nature of the data. We consider a simplified model that neglects anisotropy in the Fisher information, derive an almost analytic prediction for the mean value of the LLRS, and compare it to numerical experiments. While our simplified model outperforms the Wilks Theorem, it still does not predict the LLRS accurately, implying that alternative methods may be necessary for tomographic model selection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.

  10. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a…

  11. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a Multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.

  12. Mission science value-cost savings from the Advanced Imaging Communication System (AICS)

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1984-01-01

    An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error-free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi-decoded downlink channel. The clean channel allowed AICS to employ sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value/cost savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mar 2 series is shown.

  13. Double point source W-phase inversion: Real-time implementation and automated model selection

    NASA Astrophysics Data System (ADS)

    Nealy, Jennifer L.; Hayes, Gavin P.

    2015-12-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.

  14. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.

  15. Tightening the Noose on LMXB Formation of MSPs: Need for AIC ?

    NASA Astrophysics Data System (ADS)

    Grindlay, J. E.; Yi, I.

    1997-12-01

    The origin of millisecond pulsars (MSPs) remains an outstanding problem despite early and considerable evidence that they are the descendants of neutron stars spun up by accretion in low mass x-ray binaries (LMXBs). The route to MSPs from LMXBs may pass through the high luminosity Z-source LMXBs but is severely limited by the small population (and apparent birth rate) of Z-sources available. The more numerous x-ray bursters, the Atoll sources, are still likely too few in number or birth rate, and are now also found to be likely inefficient in the spin-up torques they can provide: accretion in these relatively low accretion rate systems is likely dominated by an advection-dominated flow, in which matter accretes onto the NS via sub-Keplerian flows that transfer correspondingly less angular momentum to the NS. We investigate the implications of the possible ADAF flows in low luminosity NS-LMXBs and find it unlikely that they can produce MSPs. The standard model can still be allowed if most NS-LMXBs are quiescent and undergo transient-like outbursts similar to the soft x-ray transients (which mostly contain black holes). However, apart from Cen X-4 and Aql X-1, few such systems have been found, and the SXTs appear instead to be significantly deficient in NS systems. Direct production of MSPs by the accretion induced collapse (AIC) of white dwarfs has been previously suggested to solve the MSP vs. LMXB birth rate problem. We re-examine AIC models in light of the new constraints on direct LMXB production, the additional difficulty imposed by ADAF flows, and constraints on SXT populations, and derive constraints on the progenitor WD spin and magnetic fields.

  16. Selecting among competing models of electro-optic, infrared camera system range performance

    USGS Publications Warehouse

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.

  17. Individual Influence on Model Selection

    ERIC Educational Resources Information Center

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  18. AN/AIC-22(V) Intercommunications Set (ICS) fiber optic link engineering analysis report

    NASA Astrophysics Data System (ADS)

    Minter, Richard; Blocksom, Roland; Ling, Christopher

    1990-08-01

    Electromagnetic interference (EMI) problems constitute a serious threat to operational Navy aircraft systems. The application of fiber optic technology is a potential solution to these problems. Reported EMI problems in the P-3 patrol aircraft AN/AIC-22(V) Intercommunications System (ICS) were selected from an EMI problem database for investigation and possible application of fiber optic technology. A proof-of-concept experiment was performed to demonstrate the level of EMI immunity of fiber optics when used in an ICS. A full duplex single channel fiber optic audio link was designed and assembled from modified government furnished equipment (GFE) previously used in another Navy fiber optic application. The link was taken to the Naval Air Test Center (NATC) Patuxent River, Maryland and temporarily installed in a Naval Research Laboratory (NRL) P-3A aircraft for a side-by-side comparison test with the installed ICS. With regard to noise reduction, the fiber optic link provided a qualitative improvement over the conventional ICS. In an effort to obtain a quantitative measure of comparison, measurements were also made across the audio frequency range both with and without operation of the aircraft VHF and UHF radio transmitters.

  19. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
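
    A toy benchmark in the spirit of the comparison above, for a model where both routes are cheap: brute-force Monte Carlo Bayesian model evidence for a one-parameter Gaussian-mean model with a uniform prior, against the BIC-style approximation log BME ~ max log-likelihood - (k/2) ln n. Everything here is synthetic and illustrative.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        y = rng.normal(0.3, 1.0, 50)        # observed data; sigma known (= 1)
        n = y.size

        def loglik(mu):
            # Total Gaussian log-likelihood for each candidate mean in `mu`
            return stats.norm.logpdf(y[:, None], mu, 1.0).sum(axis=0)

        # Brute-force Monte Carlo BME: average likelihood over prior U(-5, 5)
        mu_s = rng.uniform(-5, 5, 100_000)
        ll = loglik(mu_s)
        log_bme_mc = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()

        # BIC-style approximation with k = 1 free parameter
        log_bme_bic = loglik(np.array([y.mean()]))[0] - 0.5 * np.log(n)

        print(f"Monte Carlo log-evidence: {log_bme_mc:.2f}")
        print(f"BIC-style approximation : {log_bme_bic:.2f}")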

  20. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272

  1. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively…
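
    A condensed sketch of the distributional comparison using AIC, with synthetic zero-heavy, overdispersed counts in place of the mayfly data and intercept-only models for brevity; the statsmodels classes below are the standard ones for these assumptions, though a real analysis would include covariates.

        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(8)
        n = 959
        suitable = rng.random(n) < 0.57      # unsuitable sites: structural zeroes
        counts = np.where(suitable, rng.negative_binomial(1, 0.1, n), 0)
        X = np.ones((n, 1))                  # intercept-only design

        fits = {
            "Poisson":     sm.Poisson(counts, X).fit(disp=0),
            "NegBinomial": sm.NegativeBinomial(counts, X).fit(disp=0),
            "ZIPoisson":   ZeroInflatedPoisson(counts, X).fit(disp=0),
        }
        for name, res in fits.items():
            print(f"{name:12s} AIC = {res.aic:.1f}")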

  2. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  3. Test procedures, AN/AIC-27 system and component units [for space shuttle]

    NASA Technical Reports Server (NTRS)

    Reiff, F. H.

    1975-01-01

    The AN/AIC-27 (v) intercommunication system is a 30-channel audio distribution system which consists of: air crew station units, maintenance station units, and a central control unit. A test procedure for each of the above units and also a test procedure for the system are presented. The intent of the test is to provide data for use in shuttle audio subsystem design.

  4. Regularization Parameter Selections via Generalized Information Criterion

    PubMed Central

    Zhang, Yiyun; Li, Runze; Tsai, Chih-Ling

    2009-01-01

    We apply the nonconcave penalized likelihood approach to obtain variable selections as well as shrinkage estimators. This approach relies heavily on the choice of regularization parameter, which controls the model complexity. In this paper, we propose employing the generalized information criterion (GIC), encompassing the commonly used Akaike information criterion (AIC) and Bayesian information criterion (BIC), for selecting the regularization parameter. Our proposal makes a connection between the classical variable selection criteria and the regularization parameter selections for the nonconcave penalized likelihood approaches. We show that the BIC-type selector enables identification of the true model consistently, and the resulting estimator possesses the oracle property in the terminology of Fan and Li (2001). In contrast, however, the AIC-type selector tends to overfit with positive probability. We further show that the AIC-type selector is asymptotically loss efficient, while the BIC-type selector is not. Our simulation results confirm these theoretical findings, and an empirical example is presented. Some technical proofs are given in the online supplementary material. PMID:20676354
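
    scikit-learn's LassoLarsIC implements this kind of criterion-based tuning, so the AIC-type and BIC-type selectors can be compared in a few lines; the sparse simulated truth below is illustrative only.

        import numpy as np
        from sklearn.linear_model import LassoLarsIC

        rng = np.random.default_rng(9)
        n, p = 200, 20
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:3] = [2.0, -1.5, 1.0]              # sparse truth: 3 real effects
        y = X @ beta + rng.normal(0, 1, n)

        for crit in ("aic", "bic"):
            fit = LassoLarsIC(criterion=crit).fit(X, y)
            print(f"{crit.upper()}: alpha={fit.alpha_:.4f}  "
                  f"selected={np.flatnonzero(fit.coef_).tolist()}")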

  5. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions will be answered with classical as well as with Bayesian methods. The application shows some results of recent research projects in medicine and business administration.
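
    A classical sketch of the two questions posed above, on simulated data with hypothetical variables: an AIC-based search over the regressors, and an AIC comparison of a fixed-intercept logistic model against one with a random intercept (lme4 fits the mixed model by approximate ML, so the AIC comparison carries the usual caveats).

    ```r
    library(lme4)

    set.seed(1)
    g  <- factor(rep(1:20, each = 25))       # grouping factor (subsets)
    x1 <- rnorm(500); x2 <- rnorm(500)
    eta <- -0.5 + x1 + rnorm(20)[g]          # group intercepts; x2 has no true effect
    y <- rbinom(500, 1, plogis(eta))

    m_fixed <- glm(y ~ x1 + x2, family = binomial)
    m_mixed <- glmer(y ~ x1 + x2 + (1 | g), family = binomial)
    step(m_fixed)           # classical AIC-based search over the regressors
    AIC(m_fixed, m_mixed)   # lower AIC favours including the random intercept
    ```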

  6. Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging

    SciTech Connect

    Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.

    2006-03-31

    One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.

  7. Selective Constraints on Amino Acids Estimated by a Mechanistic Codon Substitution Model with Multiple Nucleotide Changes

    PubMed Central

    Miyazawa, Sanzo

    2011-01-01

    Background Empirical substitution matrices represent the average tendencies of substitutions over various protein families by sacrificing gene-level resolution. We develop a codon-based model in which the mutational tendencies of codons, the genetic code, and the strength of selective constraints against amino acid replacements can be tailored to a given gene. First, selective constraints averaged over proteins are estimated by maximizing the likelihood of each 1-PAM matrix of empirical amino acid (JTT, WAG, and LG) and codon (KHG) substitution matrices. Then, selective constraints specific to given proteins are approximated as a linear function of those estimated from the empirical substitution matrices. Results Akaike information criterion (AIC) values indicate that a model allowing multiple nucleotide changes fits the empirical substitution matrices significantly better. Also, the ML estimates of transition-transversion bias obtained from these empirical matrices are not as large as previously estimated. The selective constraints are characteristic of proteins rather than species. However, their relative strengths can be approximated as depending on amino acid pairs rather than on protein families, because the present model, in which selective constraints are approximated as a linear function of those estimated from the JTT/WAG/LG/KHG matrices, provides a good fit to other empirical substitution matrices, including cpREV for chloroplast proteins and mtREV for vertebrate mitochondrial proteins. Conclusions/Significance The present codon-based model, with ML estimates of selective constraints and adjustable nucleotide mutation rates, would be useful as a simple substitution model in ML and Bayesian inferences of molecular phylogenetic trees, and enables us to obtain biologically meaningful information at both the nucleotide and amino acid levels from codon and protein sequences. PMID:21445250

  8. Multidimensional Rasch Model Information-Based Fit Index Accuracy

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Wolfe, Edward W.

    2013-01-01

    Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…
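
    The four criteria named in this record are simple functions of a fitted model's log-likelihood. A generic helper using the standard formulas (not tied to the study) might look like:

    ```r
    ## AIC, AICc, BIC and CAIC from a fitted model's log-likelihood
    ic_table <- function(fit, n = nobs(fit)) {
      ll <- as.numeric(logLik(fit))
      k  <- attr(logLik(fit), "df")          # number of estimated parameters
      c(AIC  = -2 * ll + 2 * k,
        AICc = -2 * ll + 2 * k + 2 * k * (k + 1) / (n - k - 1),
        BIC  = -2 * ll + k * log(n),
        CAIC = -2 * ll + k * (log(n) + 1))
    }
    ic_table(lm(dist ~ speed, data = cars))  # example on a built-in data set
    ```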

  9. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Moreover, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion on a physical problem, simple fluids, and the results are promising.
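
    For the discrete case, the relative-entropy criterion reduces to a one-line computation. A minimal sketch (illustrative distributions only, not the paper's inductive-inference machinery):

    ```r
    ## Discrete Kullback-Leibler divergence as a ranking criterion
    kl <- function(p, q) sum(p * log(p / q))   # assumes p, q > 0 and sum to 1

    p  <- c(0.5, 0.3, 0.2)   # reference ("true") distribution
    q1 <- c(0.4, 0.4, 0.2)   # candidate model 1
    q2 <- c(0.2, 0.2, 0.6)   # candidate model 2
    c(model1 = kl(p, q1), model2 = kl(p, q2))  # smaller divergence = preferred
    ```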

  10. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on the structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the ability of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. Moreover, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
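
    The model-selection step described here can be sketched with pscl's Vuong test between a standard negative binomial and its zero-inflated counterpart. Simulated OTU-like counts stand in for real data; everything below is an illustrative assumption, not the authors' pipeline.

    ```r
    library(MASS); library(pscl)

    set.seed(1)
    x <- rnorm(300)
    y <- rnbinom(300, mu = exp(1 + 0.5 * x), size = 1)
    y[rbinom(300, 1, 0.25) == 1] <- 0      # inject structural zeros

    m_nb   <- glm.nb(y ~ x)
    m_zinb <- zeroinfl(y ~ x | 1, dist = "negbin")
    vuong(m_nb, m_zinb)   # Vuong test for the non-nested pair
    AIC(m_nb, m_zinb)     # AIC comparison, as evaluated in the paper
    ```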

  11. Selected Tether Applications Cost Model

    NASA Technical Reports Server (NTRS)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  12. Variable selection with stepwise and best subset approaches

    PubMed Central

    2016-01-01

    While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are performed automatically by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method for stepwise regression can be specified in the direction argument with the character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing the explanatory variables and the response variable; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
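
    A usage sketch of the two functions on a built-in data set (the variables are illustrative only):

    ```r
    library(MASS)     # stepAIC
    library(bestglm)  # bestglm

    ## Stepwise search from the full model; AIC is the default criterion
    full <- lm(mpg ~ wt + hp + disp + drat, data = mtcars)
    stepAIC(full, direction = "both")

    ## Exhaustive best-subset search; response must be last and named y
    Xy <- mtcars[, c("wt", "hp", "disp", "drat", "mpg")]
    names(Xy)[5] <- "y"
    bestglm(Xy, IC = "BIC")
    ```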

  13. Model selection for modified gravity.

    PubMed

    Kitching, T D; Simpson, F; Heavens, A F; Taylor, A N

    2011-12-28

    In this article, we review model selection predictions for modified gravity scenarios as an explanation for the observed acceleration of the expansion history of the Universe. We present analytical procedures for calculating expected Bayesian evidence values in two cases: (i) that modified gravity is a simple parametrized extension of general relativity (GR; two nested models), such that a Bayes' factor can be calculated, and (ii) that we have a class of non-nested models where a rank-ordering of evidence values is required. We show that, in the case of a minimal modified gravity parametrization, we can expect large area photometric and spectroscopic surveys, using three-dimensional cosmic shear and baryonic acoustic oscillations, to 'decisively' distinguish modified gravity models over GR (or vice versa), with odds of ≫1:100. It is apparent that the potential discovery space for modified gravity models is large, even in a simple extension to gravity models, where Newton's constant G is allowed to vary as a function of time and length scale. On the time and length scales where dark energy dominates, it is only through large-scale cosmological experiments that we can hope to understand the nature of gravity. PMID:22084296

  14. Perturbation of energy metabolism by fatty-acid derivative AIC-47 and imatinib in BCR-ABL-harboring leukemic cells.

    PubMed

    Shinohara, Haruka; Kumazaki, Minami; Minami, Yosuke; Ito, Yuko; Sugito, Nobuhiko; Kuranaga, Yuki; Taniguchi, Kohei; Yamada, Nami; Otsuki, Yoshinori; Naoe, Tomoki; Akao, Yukihiro

    2016-02-01

    In Ph-positive leukemia, imatinib brought marked clinical improvement; however, further improvement is needed to prevent relapse. Cancer cells efficiently use limited energy sources, and drugs targeting cellular metabolism improve the efficacy of therapy. In this study, we characterized the effects of the novel anti-cancer fatty-acid derivative AIC-47 and imatinib, focusing on cancer-specific energy metabolism, in chronic myeloid leukemia cells. AIC-47 and imatinib in combination exhibited significant synergistic cytotoxicity. Imatinib inhibited only the phosphorylation of BCR-ABL, whereas AIC-47 suppressed the expression of the protein itself. Both AIC-47 and imatinib shifted the expression of pyruvate kinase M (PKM) isoforms from PKM2 to PKM1 through the down-regulation of polypyrimidine tract-binding protein 1 (PTBP1). PTBP1 functions as an alternative-splicing repressor of PKM1, resulting in expression of PKM2, which is an inactive form of pyruvate kinase for the last step of glycolysis. Although inactivation of BCR-ABL by imatinib strongly suppressed glycolysis, compensatory activation of fatty-acid oxidation (FAO) supported glucose-independent cell survival by up-regulating CPT1C, the rate-limiting FAO enzyme. In contrast, AIC-47 inhibited the expression of CPT1C and directly inhibited fatty-acid metabolism. These findings were also observed in the CD34(+) fraction of Ph-positive acute lymphoblastic leukemia cells. These results suggest that AIC-47 in combination with imatinib strengthened the attack on cancer energy metabolism, in terms of both glycolysis and the compensatory activation of FAO. PMID:26607903

  15. A Logistic Regression Model for Personnel Selection.

    ERIC Educational Resources Information Center

    Raju, Nambury S.; And Others

    1991-01-01

    A two-parameter logistic regression model for personnel selection is proposed. The model was tested with a database of 84,808 military enlistees. The probability of job success was related directly to trait levels, addressing such topics as selection, validity generalization, employee classification, selection bias, and utility-based fair…

  16. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  17. Model Selection Indices for Polytomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung

    2009-01-01

    This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…

  18. The Coalescent Process in Models with Selection

    PubMed Central

    Kaplan, N. L.; Darden, T.; Hudson, R. R.

    1988-01-01

    Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685

  19. Model selection for anomaly detection

    NASA Astrophysics Data System (ADS)

    Burnaev, E.; Erofeev, P.; Smolyakov, D.

    2015-12-01

    Anomaly detection based on one-class classification algorithms is broadly used in many applied domains, such as image processing (e.g. detecting whether a patient is "cancerous" or "healthy" from a mammography image) and network intrusion detection. The performance of an anomaly detection algorithm crucially depends on the kernel used to measure similarity in the feature space. The standard approaches to kernel selection used in two-class classification problems (e.g. cross-validation) cannot be used directly due to the specific nature of the data (the absence of data from a second, abnormal class). In this paper we generalize several kernel selection methods from the binary-class case to the case of one-class classification and perform an extensive comparison of these approaches using both synthetic and real-world data.

  20. An Economic Model for Selective Admissions

    ERIC Educational Resources Information Center

    Haglund, Alma

    1978-01-01

    The author presents an economic model for selective admissions to postsecondary nursing programs. Primary determinants of the admissions model are employment needs, availability of educational resources, and personal resources (ability and learning potential). As there are more applicants than resources, selective admission practices are…

  1. Modelling the growth of tambaqui, Colossoma macropomum (Cuvier, 1816) in floodplain lakes: model selection and multimodel inference.

    PubMed

    Costa, L R F; Barthem, R B; Albernaz, A L; Bittencourt, M M; Villacorta-Corrêa, M A

    2013-05-01

    The tambaqui, Colossoma macropomum, is one of the most commercially valuable Amazonian fish species, and in the floodplains of the region, they are caught in both rivers and lakes. Most growth studies on this species to date have fitted only one growth model, the von Bertalanffy, without considering its possible uncertainties. In this study, four different models (von Bertalanffy, Logistic, Gompertz and the general model of Schnüte-Richards) were fitted to a data set of fish caught within lakes from the middle Solimões River. These models were fitted by non-linear equations, using the sample size of each age class as its weight. The fit of each model was evaluated based on the Akaike Information Criterion (AIC), the AIC differences between the models (Δi) and the evidence weights (wi). Both the Logistic (Δi = 0.0) and Gompertz (Δi = 1.12) models were supported by the data, but neither was clearly superior (wi = 52.44 and 29.95%, respectively). Thus, we propose the use of an averaged model to estimate the asymptotic length (L∞). The averaged model, based on the Logistic and Gompertz models, resulted in an estimate of L∞ = 90.36, indicating that the tambaqui would take approximately 25 years to reach average size. PMID:23917568
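
    The multimodel-inference workflow in this record (fit competing growth curves, compute Δi and Akaike weights wi, then model-average) can be sketched in R on simulated length-at-age data; all parameter values below are hypothetical.

    ```r
    set.seed(1)
    age <- rep(1:25, each = 4)
    len <- 90 / (1 + exp(-0.25 * (age - 8))) + rnorm(length(age), 0, 4)

    ## Two of the candidate growth curves, fitted by nonlinear least squares
    gomp <- nls(len ~ Linf * exp(-exp(-k * (age - t0))),
                start = list(Linf = 95, k = 0.2, t0 = 5))
    logi <- nls(len ~ Linf / (1 + exp(-k * (age - t0))),
                start = list(Linf = 95, k = 0.2, t0 = 8))

    aics  <- c(gompertz = AIC(gomp), logistic = AIC(logi))
    delta <- aics - min(aics)                        # the Delta_i of the paper
    w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike evidence weights w_i
    Linf_avg <- sum(w * c(coef(gomp)["Linf"], coef(logi)["Linf"]))  # averaged L-infinity
    ```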

  2. A Collaborative Model for Principal Selection.

    ERIC Educational Resources Information Center

    Richardson, M. D.; And Others

    Although the principal is critical for the success of the school and the school district, many school districts lack a structured and systematic means for identifying and selecting principals. This paper presents a collaborative model for principal selection, which is based on a valid job description, advertisement of the position, interview…

  3. Review and selection of unsaturated flow models

    SciTech Connect

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  4. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  5. Ion selective transistor modelling for behavioural simulations.

    PubMed

    Daniel, M; Janicki, M; Wroblewski, W; Dybko, A; Brzozka, Z; Napieralski, A

    2004-01-01

    Computer-aided design and simulation of complex silicon microsystems oriented toward environment monitoring requires efficient and accurate models of ion-selective sensors that are compatible with existing behavioural simulators. This paper concerns sensors based on back-side contact Ion Sensitive Field Effect Transistors (ISFETs). ISFETs with silicon nitride gates are sensitive to hydrogen ion concentration. When the transistor gate is additionally covered with a special ion-selective membrane, selectivity to ions other than hydrogen can be achieved. Such sensors are especially suitable for flow analysis of solutions containing various ions. The problem of ion-selective sensor modelling is illustrated here with a practical example of an ammonium-sensitive membrane. The membrane is investigated in the presence of some interfering ions and the appropriate selectivity coefficients are determined. Then, a model of the whole sensor is created and used in subsequent electrical simulations. Provided that the appropriate selectivity coefficients are known, the proposed model is applicable to any membrane and can be straightforwardly implemented for behavioural simulation of water monitoring microsystems. The model has already been applied in a real on-line water pollution monitoring system for the detection of various contaminants. PMID:15685987

  6. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.
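
    The comparison of main-effects against first-order-interaction models can be sketched schematically in R. The placeholder data frame below stands in for the tag-recovery data and is not the authors' age- and length-based model.

    ```r
    ## Poisson GLMs with and without first-order interactions, ranked by AIC
    set.seed(1)
    recov <- expand.grid(period = factor(1:3),
                         gear   = factor(c("hook", "gill", "pound")),
                         fate   = factor(c("harvested", "released")))
    recov <- recov[rep(seq_len(nrow(recov)), each = 30), ]
    recov$count <- rpois(nrow(recov), lambda = 3)   # placeholder counts

    m_main <- glm(count ~ period + gear + fate, family = poisson, data = recov)
    m_int  <- glm(count ~ (period + gear + fate)^2, family = poisson, data = recov)
    AIC(m_main, m_int)   # in the study, the interaction models had the lower AIC
    ```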

  7. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Burdick, Summer M.; Hightower, Joseph E.; Bacheler, Nathan M.; Paramore, Lee M.; Buckel, Jeffrey A.; Pollock, Kenneth H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated.

  8. A Pragmatic Model for Instructional Technology Selection.

    ERIC Educational Resources Information Center

    Vaccare, Carmel; Sherman, Greg

    2001-01-01

    The 4S model uses the criteria "Simple, Stable, Scalable, and Sustainable" as a filter for selecting instructional technologies. This paper considers a social dimension that uses culture and interaction as the primary consideration in the deployment of any instructional technology within the context of the 4S model. (Author/MES)

  9. Melody Track Selection Using Discriminative Language Model

    NASA Astrophysics Data System (ADS)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.

  10. Selecting model complexity in learning problems

    SciTech Connect

    Buescher, K.L.; Kumar, P.R.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.

  11. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis which assumes that covariates act linearly on the log hazard function. Continuous covariates, however, can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification when the correct functional form is not fitted. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and significance levels were set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error than the other methods for smoothing parameter selection. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809
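
    A sketch of two of these smoothers in R's survival package, on the built-in lung data: pspline(..., df = 0) asks the package to choose the penalized-spline degrees of freedom by AIC, as in the study, and ns() gives a natural spline for comparison. The AIC comparison of the penalized fit uses the penalized partial likelihood, so treat it as rough.

    ```r
    library(survival); library(splines)

    m_pen <- coxph(Surv(time, status) ~ pspline(age, df = 0), data = lung)  # df chosen by AIC
    m_ns  <- coxph(Surv(time, status) ~ ns(age, df = 4), data = lung)       # natural spline
    AIC(m_pen, m_ns)   # lower AIC = better functional form for age
    ```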

  12. An Ss Model with Adverse Selection.

    ERIC Educational Resources Information Center

    House, Christopher L.; Leahy, John V.

    2004-01-01

    We present a model of the market for a used durable in which agents face fixed costs of adjustment, the magnitude of which depends on the degree of adverse selection in the secondary market. We find that, unlike typical models, the sS bands in our model contract as the variance of the shock increases. We also analyze a dynamic version of the model…

  13. Grid selection of models of nucleotide substitution

    PubMed Central

    Loureiro, Marta; Pan, Miguel; Rodríguez-Pascual, Manuel; Posada, David; Mayo, Rafael

    2016-01-01

    jModelTest is a Java program for the statistical selection of models of nucleotide substitution with thousands of users around the world. For large data sets, the calculations carried out by this program can be too expensive for many users. Here we describe the port of the jModelTest code for Grid computing using DRMAA. This work should facilitate the use of jModelTest on a broad scale. PMID:20543444

  14. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for the calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Formulating pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  15. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892

  16. On spatial mutation-selection models

    SciTech Connect

    Kondratiev, Yuri; Kutoviy, Oleksandr, E-mail: kutovyi@mit.edu; Minlos, Robert; Pirogov, Sergey

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states, or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including the limiting behavior, is studied for two types of mutation-selection models.

  17. Tabu search model selection for SVM.

    PubMed

    Lebrun, Gilles; Charrier, Christophe; Lezoray, Olivier; Cardot, Hubert

    2008-02-01

    A model selection method based on tabu search is proposed to build support vector machines (binary decision functions) of reduced complexity and efficient generalization. The aim is to build a fast and efficient support vector machine classifier. A criterion is defined to evaluate the quality of a decision function, blending together the recognition rate and the complexity of the binary decision function. The selection of the simplification level (by vector quantization), of a feature subset, and of the support vector machine hyperparameters is performed by the tabu search method to optimize the defined decision function quality criterion, in order to find a good sub-optimal model in tractable time. PMID:18344220

  18. Observability in strategic models of viability selection.

    PubMed

    Gámez, M; Carreño, R; Kósa, A; Varga, Z

    2003-10-01

    Strategic models of frequency-dependent viability selection, in terms of mathematical systems theory, are considered as a dynamic observation system. Using a general sufficient condition for observability of nonlinear systems with invariant manifold, it is studied whether, observing certain phenotypic characteristics of the population, the development of its genetic state can be recovered, at least near equilibrium. PMID:14563566

  19. Student Selection and the Special Regression Model.

    ERIC Educational Resources Information Center

    Deck, Dennis D.

    The feasibility of constructing composite scores which will yield pretest measures having all the properties required by the special regression model is explored as an alternative to the single pretest score usually used in student selection for Elementary Secondary Education Act Title I compensatory education programs. Reading data, including…

  20. A model for plant lighting system selection

    NASA Technical Reports Server (NTRS)

    Ciolkosz, D. E.; Albright, L. D.; Sager, J. C.; Langhans, R. W.

    2002-01-01

    A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
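
    The core calculation in a Multiple Attribute Utility Theory model of this kind is a weighted aggregation of per-attribute utilities. A minimal sketch, with invented weights, attributes and lighting systems purely for illustration:

    ```r
    ## Weighted-utility ranking of candidate lighting systems (toy numbers)
    weights <- c(cost = 0.4, light_quality = 0.35, reliability = 0.25)
    scores  <- rbind(HPS = c(0.8, 0.6, 0.9),   # per-attribute utilities in [0, 1]
                     LED = c(0.5, 0.9, 0.8),
                     MH  = c(0.6, 0.7, 0.7))
    utility <- scores %*% weights              # one utility value per system
    rownames(scores)[which.max(utility)]       # system deemed most appropriate
    ```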

  1. Review and selection of unsaturated flow models

    SciTech Connect

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  2. Bayesian Model Selection for Group Studies

    PubMed Central

    Stephan, Klaas Enno; Penny, Will D.; Daunizeau, Jean; Moran, Rosalyn J.; Friston, Karl J.

    2009-01-01

    Bayesian model selection (BMS) is a powerful method for determining the most likely among a set of competing hypotheses about the mechanisms that generated observed data. BMS has recently found widespread application in neuroimaging, particularly in the context of dynamic causal modelling (DCM). However, so far, combining BMS results from several subjects has relied on simple (fixed effects) metrics, e.g. the group Bayes factor (GBF), that do not account for group heterogeneity or outliers. In this paper, we compare the GBF with two random effects methods for BMS at the between-subject or group level. These methods provide inference on model-space using a classical and Bayesian perspective respectively. First, a classical (frequentist) approach uses the log model evidence as a subject-specific summary statistic. This enables one to use analysis of variance to test for differences in log-evidences over models, relative to inter-subject differences. We then consider the same problem in Bayesian terms and describe a novel hierarchical model, which is optimised to furnish a probability density on the models themselves. This new variational Bayes method rests on treating the model as a random variable and estimating the parameters of a Dirichlet distribution which describes the probabilities for all models considered. These probabilities then define a multinomial distribution over model space, allowing one to compute how likely it is that a specific model generated the data of a randomly chosen subject as well as the exceedance probability of one model being more likely than any other model. Using empirical and synthetic data, we show that optimising a conditional density of the model probabilities, given the log-evidences for each model over subjects, is more informative and appropriate than both the GBF and frequentist tests of the log-evidences. In particular, we found that the hierarchical Bayesian approach is considerably more robust than either of the other approaches.

  3. Integrative variable selection via Bayesian model uncertainty.

    PubMed

    Quintana, M A; Conti, D V

    2013-12-10

    We are interested in developing integrative approaches for variable selection problems that incorporate external knowledge on a set of predictors of interest. In particular, we have developed an integrative Bayesian model uncertainty (iBMU) method, which formally incorporates multiple sources of data via a second-stage probit model on the probability that any predictor is associated with the outcome of interest. Using simulations, we demonstrate that iBMU leads to an increase in power to detect true marginal associations over more commonly used variable selection techniques, such as least absolute shrinkage and selection operator and elastic net. In addition, iBMU leads to a more efficient model search algorithm over the basic BMU method even when the predictor-level covariates are only modestly informative. The increase in power and efficiency of our method becomes more substantial as the predictor-level covariates become more informative. Finally, we demonstrate the power and flexibility of iBMU for integrating both gene structure and functional biomarker information into a candidate gene study investigating over 50 genes in the brain reward system and their role with smoking cessation from the Pharmacogenetics of Nicotine Addiction and Treatment Consortium. PMID:23824835

  4. Aspects of model selection in multivariate analyses

    SciTech Connect

    Picard, R.

    1982-01-01

    Analysis of data sets that involve large numbers of variables usually entails some type of model fitting and data reduction. In regression problems, a fitted model that is obtained by a selection process can be difficult to evaluate because of optimism induced by the choice mechanism. Problems in areas such as discriminant analysis, calibration, and the like often lead to similar difficulties. The preceding sections reviewed some of the general ideas behind the assessment of regression-type predictors and illustrated how they can be easily incorporated into a standard data analysis.

  5. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters yield a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  6. Model selection for radiochromic film dosimetry

    NASA Astrophysics Data System (ADS)

    Méndez, I.

    2015-05-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to provide better results than using Micke-Mayer perturbation models. Among the models being compared, the triple-channel model with Truncated Normal perturbations, net optical density as the response and subject to the application of lateral corrections was found to be the most accurate model. The scope of this study was circumscribed by the limits under which the models were tested. In this study, the films were irradiated with megavoltage radiotherapy beams, with doses from about 20-600 cGy, entire (8 inch  × 10 inch) films were scanned, the functional form of the sensitometric curves was a polynomial and the different lots were calibrated using the plane-based method.

  7. Selection and estimation for mixed graphical models

    PubMed Central

    Chen, Shizhe; Witten, Daniela M.; Shojaie, Ali

    2016-01-01

    Summary We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies.

  8. Model selection, zero-inflated models, and predictors of primate abundance in Korup National Park, Cameroon.

    PubMed

    Linder, Joshua M; Lawler, Richard R

    2012-11-01

    Determining the ecological and anthropogenic factors that shape the abundance and distribution of wild primates is a critical component of primate conservation research. Such research is complicated, however, whenever the species under study are encountered infrequently, a characteristic of many taxa that are threatened with extinction. Typically, the resulting data sets based on surveys of such species will have a high frequency of zero counts which makes it difficult to determine the predictor variables that are associated with species abundance. In this study, we test various statistical models using survey data that was gathered on seven species of primate in Korup National Park, Cameroon. Predictor variables include hunting signs and aspects of habitat structure and floristic composition. Our statistical models include zero-inflated models that are tailored to deal with a high frequency of zero counts. First, using exploratory data analysis we found the most informative set of models as ranked by Δ-AIC (Akaike's information criterion). On the basis of this analysis, we used five predictor variables to construct several regression models including Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial. Total basal area of all trees, density of secondary tree species, hunting signs, and mean basal area of all trees were significant predictors of abundance in the zero-inflated models. We discuss the statistical logic behind zero-inflated models and provide an interpretation of parameter estimates. We recommend that researchers explore a variety of models when determining the factors that correlate with primate abundance. PMID:22991216

  9. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.

  10. Improved modeling of GPS selective availability

    NASA Technical Reports Server (NTRS)

    Braasch, Michael S.; Fink, Annmarie; Duffus, Keith

    1994-01-01

    Selective Availability (SA) represents the dominant error source for stand-alone users of the Global Positioning System (GPS). Even for DGPS, SA mandates the update rate required for a desired level of accuracy in realtime applications. As was witnessed in the recent literature, the ability to model this error source is crucial to the proper evaluation of GPS-based systems. A variety of SA models have been proposed to date; however, each has its own shortcomings. Most of these models were based on limited data sets or on data which were corrupted by additional error sources. A comprehensive treatment of the problem is presented. The phenomenon of SA is discussed, and a technique is presented whereby both the clock and orbit components of SA are identifiable. Extensive SA data sets collected from Block 2 satellites are presented. System Identification theory is then used to derive a robust model of SA from the data. This theory also allows for the statistical analysis of SA. The stationarity of SA over time and across different satellites is analyzed and its impact on the modeling problem is discussed.

  11. Modeling selective local interactions with memory

    PubMed Central

    Galante, Amanda; Levy, Doron

    2012-01-01

    Recently we developed a stochastic particle system describing local interactions between cyanobacteria. We focused on the common freshwater cyanobacteria Synechocystis sp., which are coccoidal bacteria that utilize group dynamics to move toward a light source, a motion referred to as phototaxis. We were particularly interested in the local interactions between cells that were located in low to medium density areas away from the front. The simulations of our stochastic particle system in 2D replicated many experimentally observed phenomena, such as the formation of aggregations and the quasi-random motion of cells. In this paper, we seek to develop a better understanding of group dynamics produced by this model. To facilitate this study, we replace the stochastic model with a system of ordinary differential equations describing the evolution of particles in 1D. Unlike many other models, our emphasis is on particles that selectively choose one of their neighbors as the preferred direction of motion. Furthermore, we incorporate memory by allowing persistence in the motion. We conduct numerical simulations which allow us to efficiently explore the space of parameters, in order to study the stability, size, and merging of aggregations. PMID:24244060

  12. Transformation model selection by multiple hypotheses testing

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger

    2014-12-01

    Transformations between different geodetic reference frames are often performed such that the transformation parameters are first determined from control points. If we do not know in advance which of the numerous transformation models is appropriate, then we can set up a multiple hypotheses test. The paper extends the common method of testing transformation parameters for significance to the case where constraints on such parameters are also tested. This provides more flexibility when setting up such a test. One can formulate a general model with a maximum number of transformation parameters and specialize it by adding constraints to those parameters which need to be tested. The proper test statistic in a multiple test is shown to be either the extreme normalized or the extreme studentized Lagrange multiplier. These are shown to perform better than the more intuitive test statistics derived from misclosures. It is shown how model selection by multiple hypotheses testing relates to the use of information criteria such as AICc and Mallows' Cp, which are based on an information-theoretic approach. Nevertheless, whenever comparable, the results of an exemplary computation almost coincide.

  13. Entropic Priors and Bayesian Model Selection

    NASA Astrophysics Data System (ADS)

    Brewer, Brendon J.; Francis, Matthew J.

    2009-12-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor." This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst cosmologists: is dark energy a cosmological constant, or has it evolved with time in some way? And how shall we decide, when the data are in?

  14. Bayesian model selection analysis of WMAP3

    SciTech Connect

    Parkinson, David; Mukherjee, Pia; Liddle, Andrew R.

    2006-06-15

    We present a Bayesian model selection analysis of WMAP3 data using our code CosmoNest. We focus on the density perturbation spectral index n_s and the tensor-to-scalar ratio r, which define the plane of slow-roll inflationary models. We find that while the Bayesian evidence supports the conclusion that n_s ≠ 1, the data are not yet powerful enough to do so at a strong or decisive level. If tensors are assumed absent, the current odds are approximately 8 to 1 in favor of n_s ≠ 1 under our assumptions, when WMAP3 data are used together with external data sets. WMAP3 data on their own are unable to distinguish between the two models. Further, inclusion of r as a parameter weakens the conclusion against the Harrison-Zel'dovich case (n_s = 1, r = 0), albeit in a prior-dependent way. In appendices we describe the CosmoNest code in detail, noting its ability to supply posterior samples as well as to accurately compute the Bayesian evidence. We make a first public release of CosmoNest, now available at www.cosmonest.org.
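
    To make the notion of Bayesian evidence concrete, here is a toy Python calculation (not WMAP data, and not the CosmoNest machinery, which uses nested sampling over a full cosmological likelihood): the evidence for a model with n_s fixed at 1 is compared against a model with n_s free under a uniform prior, using an invented Gaussian constraint on n_s.

        import numpy as np

        ns_hat, sigma = 0.96, 0.015                # hypothetical constraint on n_s
        like = lambda ns: np.exp(-0.5 * ((ns - ns_hat) / sigma) ** 2)

        Z0 = like(1.0)                             # M0: n_s = 1 exactly

        ns = np.linspace(0.8, 1.2, 4001)           # M1: uniform prior on [0.8, 1.2]
        Z1 = np.trapz(like(ns) / 0.4, ns)          # prior density is 1/0.4

        print("odds M1:M0 =", Z1 / Z0)             # Occam penalty is built into Z1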

  15. Selecting a model of supersymmetry breaking mediation

    SciTech Connect

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-08-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  16. Automatic picking based on an AR-AIC cost-function approach applied to tele-, regional- and induced seismic datasets

    NASA Astrophysics Data System (ADS)

    Olbert, Kai; Meier, Thomas; Cristiano, Luigia

    2015-04-01

    A quick picking procedure is an important tool to process large datasets in seismology. Identifying phases and determining the precise onset times at seismological stations is essential not just for localization procedures but also for seismic body-wave tomography. The automated picking procedure should be fast, robust, precise and consistent. In manual processing the speed and consistency are not guaranteed and therefore unreproducible errors may be introduced, especially for large amounts of data. In this work an offline P- and S-phase picker based on an autoregressive-prediction approach is optimized and applied to different data sets. The onset time can be described as the sum of the event source time, the theoretical travel time according to a reference velocity model and a deviation from the theoretical travel time due to lateral heterogeneity or errors in the source location. With this approach the onset time at each station can be found around the theoretical travel time within a time window smaller than the maximum lateral heterogeneity. Around the theoretical travel time an autoregressive prediction error is calculated from one or several components as the characteristic function of the waveform. The minimum of the Akaike Information Criterion of the characteristic function identifies the phase. As was shown by Küperkoch et al. (2012), the Akaike Information Criterion has the tendency to be too late. Therefore, an additional processing step for precise picking is needed. In the vicinity of the minimum of the Akaike Information Criterion a cost function is defined and used to find the optimal estimate of the arrival time. The cost function is composed of the CF and three side conditions. The idea behind the use of a cost function is to find the phase pick in the last minimum before the CF rises due to the phase onset. The final onset time is picked in the minimum of the cost function. The automatic picking procedure is applied to datasets recorded at stations of the
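
    A bare-bones Python sketch of an AR-AIC style pick (illustrative only, not the authors' implementation): an autoregressive model fitted to an assumed noise segment yields a prediction-error characteristic function (CF), and the onset is taken at the minimum of the AIC of that CF; the cost-function refinement described above is omitted here.

        import numpy as np

        def ar_design(x, p):
            # row for time t holds x[t-1], ..., x[t-p]
            return np.column_stack([x[p - 1 - i : len(x) - 1 - i] for i in range(p)])

        def ar_aic_pick(trace, order=4, noise_len=250):
            noise = trace[:noise_len]
            a, *_ = np.linalg.lstsq(ar_design(noise, order), noise[order:], rcond=None)
            pred = ar_design(trace, order) @ a      # one-step predictions of trace[order:]
            cf = np.abs(trace[order:] - pred)       # prediction error as CF
            n = len(cf)
            k = np.arange(5, n - 5)                 # skip unstable edges
            var1 = np.array([cf[:i].var() for i in k])
            var2 = np.array([cf[i:].var() for i in k])
            aic = k * np.log(var1 + 1e-12) + (n - k) * np.log(var2 + 1e-12)
            return order + int(k[np.argmin(aic)])   # sample index of the pick

        rng = np.random.default_rng(7)
        tr = np.concatenate([rng.normal(0, 1, 300), rng.normal(0, 5, 200)])
        print("picked onset near sample", ar_aic_pick(tr))   # true onset at sample 300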

  17. Model Related Estimates of time dependent quantiles of peak flows - case study for selected catchments in Poland

    NASA Astrophysics Data System (ADS)

    Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay

    2016-04-01

    Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows from thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods, whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why simple forms of trend (i.e. linear trends) are more difficult to identify in AM time series than in Seasonal Maxima (SM), usually winter-season time series. Hence it is recommended to analyse trends in SM, where a trend in standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with the extrapolation of the trend makes it necessary to apply a relationship for the trend whose time derivative tends to zero, e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with a trend model in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are mutually compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
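
    The multi-model weighting step can be sketched in a few lines of Python; for brevity the fits below are stationary (the paper fits time-dependent location, scale and shape with GAMLSS), and the seasonal-maxima series is simulated rather than observed.

        import numpy as np
        from scipy import stats

        sm_flows = np.random.default_rng(1).gumbel(300.0, 80.0, size=60)  # fake maxima

        candidates = {"gumbel": stats.gumbel_r, "gev": stats.genextreme,
                      "lognormal": stats.lognorm}
        fits, aics = {}, {}
        for name, dist in candidates.items():
            params = dist.fit(sm_flows)
            aics[name] = 2 * len(params) - 2 * dist.logpdf(sm_flows, *params).sum()
            fits[name] = params

        d = np.array(list(aics.values())) - min(aics.values())
        w = np.exp(-0.5 * d) / np.exp(-0.5 * d).sum()       # AIC (Akaike) weights

        # AIC-weighted multi-model distribution, e.g. its 99th percentile
        x = np.linspace(sm_flows.min(), sm_flows.max() * 2, 2000)
        cdf = sum(wi * candidates[n].cdf(x, *fits[n]) for wi, n in zip(w, candidates))
        print(dict(zip(candidates, w.round(3))), "q99 =", x[np.searchsorted(cdf, 0.99)])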

  18. Selective experimental review of the Standard Model

    SciTech Connect

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_QED, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3, and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q q-bar states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures.

  19. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    Section 425.600 Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  20. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    Section 425.600 Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  1. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    Section 425.600 Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  2. Selection of Instructional Materials. A Model Policy and Rules.

    ERIC Educational Resources Information Center

    Bartlett, Larry D.; And Others

    This model prepared by the State of Iowa Department of Public Instruction is intended to provide assistance to schools in developing their own policy and procedures for the selection of library media and text materials. A brief model statement of policy is followed by a model statement of rules which includes (1) responsibility for selection of…

  3. Cognitive Niches: An Ecological Model of Strategy Selection

    ERIC Educational Resources Information Center

    Marewski, Julian N.; Schooler, Lael J.

    2011-01-01

    How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each…

  4. HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION

    EPA Science Inventory

    Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...

  5. Patterns of Neutral Diversity Under General Models of Selective Sweeps

    PubMed Central

    Coop, Graham; Ralph, Peter

    2012-01-01

    Two major sources of stochasticity in the dynamics of neutral alleles result from resampling of finite populations (genetic drift) and the random genetic background of nearby selected alleles on which the neutral alleles are found (linked selection). There is now good evidence that linked selection plays an important role in shaping polymorphism levels in a number of species. One of the best-investigated models of linked selection is the recurrent full-sweep model, in which newly arisen selected alleles fix rapidly. However, the bulk of selected alleles that sweep into the population may not be destined for rapid fixation. Here we develop a general model of recurrent selective sweeps in a coalescent framework, one that generalizes the recurrent full-sweep model to the case where selected alleles do not sweep to fixation. We show that in a large population, only the initial rapid increase of a selected allele affects the genealogy at partially linked sites, which under fairly general assumptions are unaffected by the subsequent fate of the selected allele. We also apply the theory to a simple model to investigate the impact of recurrent partial sweeps on levels of neutral diversity and find that for a given reduction in diversity, the impact of recurrent partial sweeps on the frequency spectrum at neutral sites is determined primarily by the frequencies rapidly achieved by the selected alleles. Consequently, recurrent sweeps of selected alleles to low frequencies can have a profound effect on levels of diversity but can leave the frequency spectrum relatively unperturbed. In fact, the limiting coalescent model under a high rate of sweeps to low frequency is identical to the standard neutral model. The general model of selective sweeps we describe goes some way toward providing a more flexible framework to describe genomic patterns of diversity than is currently available. PMID:22714413

  6. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    PubMed

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models. PMID:27550703

  7. Model Selection for Monitoring CO2 Plume during Sequestration

    SciTech Connect

    2014-12-31

    The model selection method developed as part of this project mainly includes four steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models, (2) model clustering using multidimensional scaling coupled with k-means clustering, (3) model selection using Bayes' rule in the reduced model space, (4) model expansion using iterative resampling of the posterior models. The fourth step reflects one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the final ensemble of representations of the geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.
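
    Steps (1)-(2) can be illustrated with a minimal Python sketch using scikit-learn; the model "responses" below are synthetic stand-ins for the connectivity/dynamic summaries computed from the prior ensemble.

        import numpy as np
        from sklearn.manifold import MDS
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        responses = rng.normal(size=(100, 50))   # 100 prior models x 50 time steps

        # pairwise distances between model responses
        D = np.linalg.norm(responses[:, None, :] - responses[None, :, :], axis=-1)

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(D)
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
        # Cluster representatives would then be weighted by Bayes' rule against the
        # monitoring data (step 3) and the posterior clusters resampled (step 4).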

  9. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20, (20, 24] and >24, for which the following values were obtained: AIC = 314.5 and AUC = 0.638. The respective values for the continuous predictor were AIC = 317.1 and AUC = 0.634, with no statistically
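
    The AIC/AUC comparison reported above can be reproduced in outline with statsmodels; the data, true risk curve and cut points below are synthetic stand-ins for the respiratory-rate example.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        rr = rng.normal(22, 4, 500)                          # respiratory rate
        y = rng.binomial(1, 1 / (1 + np.exp(-(-6 + 0.25 * rr))))

        X_cont = sm.add_constant(rr)
        m_cont = sm.Logit(y, X_cont).fit(disp=0)

        cats = pd.cut(rr, [-np.inf, 20, 24, np.inf], labels=["low", "mid", "high"])
        X_cat = sm.add_constant(pd.get_dummies(cats, drop_first=True).astype(float))
        m_cat = sm.Logit(y, X_cat).fit(disp=0)

        for name, m, X in [("continuous", m_cont, X_cont), ("3 categories", m_cat, X_cat)]:
            print(name, "AIC=%.1f" % m.aic, "AUC=%.3f" % roc_auc_score(y, m.predict(X)))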

  10. Selection of Hydrological Model for Waterborne Release

    SciTech Connect

    Blanchard, A.

    1999-02-03

    The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the DB and BDB accidents to be used in the future study.

  11. Selection of Hydrological Model for Waterborne Release

    SciTech Connect

    Blanchard, A.

    1999-04-21

    This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the Design Basis and Beyond Design Basis accidents to be used in the future study.

  12. The Multilingual Lexicon: Modelling Selection and Control

    ERIC Educational Resources Information Center

    de Bot, Kees

    2004-01-01

    In this paper an overview of research on the multilingual lexicon is presented as the basis for a model for processing multiple languages. With respect to specific issues relating to the processing of more than two languages, it is suggested that there is no need to develop a specific model for such multilingual processing, but at the same time we…

  13. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  14. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  15. Bayesian model selection for LISA pathfinder

    NASA Astrophysics Data System (ADS)

    Karnesis, Nikolaos; Nofrarias, Miquel; Sopuerta, Carlos F.; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; McNamara, Paul W.; Plagnol, Eric; Vitale, Stefano

    2014-03-01

    The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the eLISA concept. The data analysis team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment onboard the LPF. These models are used for simulations, but, more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the data analysis team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of a LTP model fitting the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three main different methods to estimate it: the reversible jump Markov chain Monte Carlo method, the Schwarz criterion, and the Laplace approximation. They are applied to simulated LPF experiments in which the most probable LTP model that explains the observations is recovered. The same type of analysis presented in this paper is expected to be followed during flight operations. Moreover, the correlation of the output of the aforementioned methods with the design of the experiment is explored.
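
    Two of the evidence estimators named above have compact generic forms. The hedged Python sketch below shows the Laplace approximation (with the Schwarz criterion noted as its large-sample limit); 'neg_log_post' stands for any user-supplied negative log-posterior, and the quadratic toy function is only there to check the result against the exact answer.

        import numpy as np
        from scipy.optimize import minimize

        def log_evidence_laplace(neg_log_post, theta0):
            res = minimize(neg_log_post, theta0)        # posterior mode (BFGS default)
            d = len(np.atleast_1d(res.x))
            H_inv = np.atleast_2d(res.hess_inv)         # inverse Hessian at the mode
            return (-res.fun + 0.5 * d * np.log(2 * np.pi)
                    + 0.5 * np.log(np.linalg.det(H_inv)))

        toy = lambda th: 0.5 * np.sum((th - 1.0) ** 2) + 3.0  # exact log Z = -3 + log(2*pi)
        print(log_evidence_laplace(toy, np.zeros(2)))
        # Schwarz/BIC analogue: log Z ~ max log-likelihood - (d/2) * log(N)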

  16. Methods for model selection in applied science and engineering.

    SciTech Connect

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  17. Pathophysiological Progression Model for Selected Toxicological Endpoints

    EPA Science Inventory

    The existing continuum paradigms are effective models to organize toxicological data associated with endpoints used in human health assessments. A compendium of endpoints characterized along a pathophysiological continuum would serve to: weigh the relative importance of effects o...

  18. Deviance statistics in model fit and selection in ROC studies

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    A general non-linear regression model-based Bayesian inference approach is used in our ROC (Receiver Operating Characteristics) study. In sampling the posterior distribution, two prior models, continuous Gaussian and discrete categorical, are used for the scale parameter. To judge the goodness-of-fit (GOF) of each model and to criticize these two models, Deviance statistics and the Deviance Information Criterion (DIC) are adopted. Model fit and model selection focus on the adequacy of models. Judging model adequacy is essentially measuring agreement between model and observations. Deviance statistics and the DIC provide overall measures of model fit and selection. In order to investigate model fit at each category of observations, we find that the cumulative, exponential contributions from individual observations to Deviance statistics are good estimates of the FPF (false positive fraction) and TPF (true positive fraction) on which the ROC curve is based. This finding further leads to a new measure for model fit, called the FPF-TPF distance, which is a Euclidean distance defined on FPF-TPF space. It combines both local and global fitting. Deviance statistics and the FPF-TPF distance are shown to be consistent and in good agreement. Theoretical derivation and numerical simulations for this new method for model fit and model selection of ROC data analysis are included. Keywords: General non-linear regression model, Bayesian Inference, Markov Chain Monte Carlo (MCMC) method, Goodness-of-Fit (GOF), Model selection, Deviance statistics, Deviance information criterion (DIC), Continuous conjugate prior, Discrete categorical prior.
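
    The DIC itself has a simple generic form given posterior draws; the Python sketch below is not the authors' ROC-specific code, just the standard computation DIC = Dbar + pD with pD = Dbar - D(theta_bar).

        import numpy as np

        def dic(log_lik, samples):
            """log_lik(theta) -> total log-likelihood; samples: (n_draws, n_params)."""
            dev = np.array([-2.0 * log_lik(th) for th in samples])  # deviance per draw
            dbar = dev.mean()
            dhat = -2.0 * log_lik(samples.mean(axis=0))             # deviance at mean
            return dbar + (dbar - dhat)                             # Dbar + pD

        # toy usage: Gaussian likelihood for data y, posterior draws of its mean
        y = np.array([0.2, -0.4, 0.1, 0.5])
        log_lik = lambda th: -0.5 * np.sum((y - th[0]) ** 2)
        draws = np.random.default_rng(3).normal(y.mean(), 0.5, size=(2000, 1))
        print(dic(log_lik, draws))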

  19. Python Program to Select HII Region Models

    NASA Astrophysics Data System (ADS)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly Oxygen, Nitrogen, and Neon, can give important information about HII regions including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
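
    The matching step reduces to a chi-square search over a model grid; the Python fragment below shows the idea with invented line ratios and a placeholder grid rather than Rubin's or Cloudy's actual tables.

        import numpy as np

        model_grid = {   # (n_e [cm^-3], T_eff [K]) -> predicted line ratios (made up)
            (100.0, 35000.0): {"[OIII]52/88": 1.10, "[NIII]57/[NII]122": 3.2},
            (300.0, 35000.0): {"[OIII]52/88": 1.85, "[NIII]57/[NII]122": 3.4},
            (100.0, 40000.0): {"[OIII]52/88": 1.15, "[NIII]57/[NII]122": 5.1},
        }
        observed = {"[OIII]52/88": (1.2, 0.1), "[NIII]57/[NII]122": (4.8, 0.6)}

        def chi2(pred):
            return sum(((v - pred[k]) / s) ** 2 for k, (v, s) in observed.items())

        best = min(model_grid, key=lambda m: chi2(model_grid[m]))
        print("best (n_e, T_eff):", best, "chi2 =", round(chi2(model_grid[best]), 2))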

  20. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    PubMed

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but selecting which method or combination of methods to use for the specific data being analyzed is difficult. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  1. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604

  2. Modeling Selective Intergranular Oxidation of Binary Alloys

    SciTech Connect

    Xu, Zhijie; Li, Dongsheng; Schreiber, Daniel K.; Rosso, Kevin M.; Bruemmer, Stephen M.

    2015-01-07

    Intergranular attack of alloys under hydrothermal conditions is a complex problem that depends on metal and oxygen transport kinetics via solid-state and channel-like pathways to an advancing oxidation front. Experiments reveal very different rates of intergranular attack and minor element depletion distances ahead of the oxidation front for nickel-based binary alloys depending on the minor element. For example, a significant Cr depletion up to 9 µm ahead of grain boundary crack tips was documented for Ni-5Cr binary alloy, in contrast to relatively moderate Al depletion for Ni-5Al (~100s of nm). We present a mathematical kinetics model that adapts Wagner's model for thick film growth to intergranular attack of binary alloys. The transport coefficients of elements O, Ni, Cr, and Al in bulk alloys and along grain boundaries were estimated from the literature. For planar surface oxidation, a critical concentration of the minor element can be determined from the model where the oxide of the minor element becomes dominant over the major element. This generic model for simple grain boundary oxidation can predict oxidation penetration velocities and minor element depletion distances ahead of the advancing front that are comparable to experimental data. The significant distance of depletion of Cr in Ni-5Cr in contrast to the localized Al depletion in Ni-5Al can be explained by the model due to the combination of the relatively faster diffusion of Cr along the grain boundary and slower diffusion in bulk grains, relative to Al.

  3. Selection of Hydrological Model for Waterborne Release

    SciTech Connect

    Blanchard, A.

    1999-04-21

    Following a request from the States of South Carolina and Georgia, downstream radiological consequences from postulated accidental aqueous releases at the three Savannah River Site nonreactor nuclear facilities will be examined. This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the accidents to be used in the future study.

  4. Modeling HIV-1 Drug Resistance as Episodic Directional Selection

    PubMed Central

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L.; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance. PMID:22589711

  5. Modeling selective intergranular oxidation of binary alloys.

    PubMed

    Xu, Zhijie; Li, Dongsheng; Schreiber, Daniel K; Rosso, Kevin M; Bruemmer, Stephen M

    2015-01-01

    Intergranular attack of alloys under hydrothermal conditions is a complex problem that depends on metal and oxygen transport kinetics via solid-state and channel-like pathways to an advancing oxidation front. Experiments reveal very different rates of intergranular attack and minor element depletion distances ahead of the oxidation front for nickel-based binary alloys depending on the minor element. For example, a significant Cr depletion up to 9 μm ahead of grain boundary crack tips was documented for Ni-5Cr binary alloy, in contrast to relatively moderate Al depletion for Ni-5Al (∼100s of nm). We present a mathematical kinetics model that adapts Wagner's model for thick film growth to intergranular attack of binary alloys. The transport coefficients of elements O, Ni, Cr, and Al in bulk alloys and along grain boundaries were estimated from the literature. For planar surface oxidation, a critical concentration of the minor element can be determined from the model where the oxide of the minor element becomes dominant over the major element. This generic model for simple grain boundary oxidation can predict oxidation penetration velocities and minor element depletion distances ahead of the advancing front that are comparable to experimental data. The significant distance of depletion of Cr in Ni-5Cr in contrast to the localized Al depletion in Ni-5Al can be explained by the model due to the combination of the relatively faster diffusion of Cr along the grain boundary and slower diffusion in bulk grains, relative to Al. PMID:25573575

  6. Rubber yield prediction by meteorological conditions using mixed models and multi-model inference techniques.

    PubMed

    Golbon, Reza; Ogutu, Joseph Ochieng; Cotter, Marc; Sauerborn, Joachim

    2015-12-01

    Linear mixed models were developed and used to predict rubber (Hevea brasiliensis) yield based on meteorological conditions to which rubber trees had been exposed for periods ranging from 1 day to 2 months prior to tapping events. Predictors included a range of moving averages of meteorological covariates spanning different windows of time before the date of the tapping events. Serial autocorrelation in the latex yield measurements was accounted for using random effects and a spatial generalization of the autoregressive error covariance structure suited to data sampled at irregular time intervals. Information-theoretic criteria, specifically the Akaike information criterion (AIC), AIC corrected for small sample size (AICc), and Akaike weights, were used to select models with the greatest strength of support in the data from a set of competing candidate models. The predictive performance of the selected best model was evaluated using both leave-one-out cross-validation (LOOCV) and an independent test set. Moving averages of precipitation, minimum and maximum temperature, and maximum relative humidity with a 30-day lead period were identified as the best yield predictors. Prediction accuracy, expressed in terms of the percentage of predictions within a measurement error of 5 g, was above 99 % for both cross-validation and the test dataset. PMID:25824122
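
    The lag-window search can be sketched as follows (Python with simulated data; the study itself fitted linear mixed models with an autoregressive error structure): each candidate moving-average window defines one model, and AICc picks among them.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        rain = pd.Series(rng.gamma(2.0, 3.0, 400))
        latex_yield = 50 + 0.8 * rain.rolling(30).mean() + rng.normal(0, 2, 400)

        aiccs = {}
        for window in (1, 7, 14, 30, 60):
            df = pd.DataFrame({"y": latex_yield,
                               "x": rain.rolling(window).mean()}).dropna()
            res = sm.OLS(df["y"], sm.add_constant(df["x"])).fit()
            k = res.df_model + 2              # slope + intercept + error variance
            aiccs[window] = res.aic + 2 * k * (k + 1) / (len(df) - k - 1)

        print({w: round(a, 1) for w, a in aiccs.items()})
        print("selected window:", min(aiccs, key=aiccs.get), "days")   # expect 30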

  7. Rubber yield prediction by meteorological conditions using mixed models and multi-model inference techniques

    NASA Astrophysics Data System (ADS)

    Golbon, Reza; Ogutu, Joseph Ochieng; Cotter, Marc; Sauerborn, Joachim

    2015-12-01

    Linear mixed models were developed and used to predict rubber (Hevea brasiliensis) yield based on meteorological conditions to which rubber trees had been exposed for periods ranging from 1 day to 2 months prior to tapping events. Predictors included a range of moving averages of meteorological covariates spanning different windows of time before the date of the tapping events. Serial autocorrelation in the latex yield measurements was accounted for using random effects and a spatial generalization of the autoregressive error covariance structure suited to data sampled at irregular time intervals. Information-theoretic criteria, specifically the Akaike information criterion (AIC), AIC corrected for small sample size (AICc), and Akaike weights, were used to select models with the greatest strength of support in the data from a set of competing candidate models. The predictive performance of the selected best model was evaluated using both leave-one-out cross-validation (LOOCV) and an independent test set. Moving averages of precipitation, minimum and maximum temperature, and maximum relative humidity with a 30-day lead period were identified as the best yield predictors. Prediction accuracy, expressed in terms of the percentage of predictions within a measurement error of 5 g, was above 99 % for both cross-validation and the test dataset.

  8. Remedial action selection using groundwater modeling

    SciTech Connect

    Haddad, B.I.; Parish, G.B.; Hauge, L.

    1996-12-31

    An environmental investigation uncovered petroleum contamination at a gasoline station in southern Wisconsin. The site was located in part of the ancestral Rock River valley in Rock County, Wisconsin, where the valley is filled with sands and gravels. Groundwater pump tests were conducted for determination of aquifer properties needed to plan a remediation system; the results were indicative of a very high hydraulic conductivity. The site hydrogeology was modeled using the U.S. Geological Survey's groundwater model, Modflow. The calibrated model was used to determine the number, pumping rate, and configuration of recovery wells to remediate the site. The most effective configuration was three wells pumping at 303 liters per minute (L/min) (80 gallons per minute (gpm)), producing a total pumping rate of 908 L/min (240 gpm). Treating 908 L/min (240 gpm), or 1,308,240 liters per day (345,600 gallons per day), constituted a significant volume to be treated and discharged. It was estimated that pumping for the two-year remediation would cost $375,000 while the air sparging would cost $200,000. The recommended remedial system consisted of eight air sparging wells and four vapor recovery laterals. The Wisconsin Department of Natural Resources (WDNR) approved the remedial action plan in March, 1993. After 11 months of effective operation the concentrations of removed VOCs had decreased by 94 percent and groundwater sampling indicated no detectable concentrations of gasoline contaminants. Groundwater modeling was an effective technique to determine the economic feasibility of a groundwater remedial alternative.

  9. flankr: An R package implementing computational models of attentional selectivity.

    PubMed

    Grange, James A

    2016-06-01

    The Eriksen flanker task (Eriksen and Eriksen, Perception & Psychophysics, 16, 143-149, 1974) is a classic test in cognitive psychology of visual selective attention. Two recent computational models have formalised the dynamics of the apparent increasing attentional selectivity during stimulus processing, but with very different theoretical underpinnings: the shrinking spotlight (SSP) model (White et al., Cognitive Psychology, 210-238, 2011) assumes attentional selectivity improves in a gradual, continuous manner; the dual stage two phase (DSTP) model (Hübner et al., Psychological Review, 759-784, 2010) assumes attentional selectivity changes from a low to a high mode of selectivity at a discrete time point. This paper presents an R package, flankr, that instantiates both computational models. flankr allows the user to simulate data from both models, and to fit each model to human data. flankr provides statistics of the goodness-of-fit to human data, allowing users to engage in competitive model comparison of the DSTP and the SSP models on their own data. It is hoped that the utility of flankr lies in allowing more researchers to engage in the important issue of the dynamics of attentional selectivity. PMID:26174713

  10. Selecting Research Collections for Digitization: Applying the Harvard Model.

    ERIC Educational Resources Information Center

    Brancolini, Kristine R.

    2000-01-01

    Librarians at Harvard University have written the most comprehensive guide to selecting research collections for digitization. This article applies the Harvard Model to a digitization project at Indiana University in order to evaluate the appropriateness of the model for use at another institution and to adapt the model to local needs. (Contains 7…

  11. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  12. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  13. A Conditional Logit Model of Collegiate Major Selection.

    ERIC Educational Resources Information Center

    Milley, Donald J.; Bee, Richard H.

    1982-01-01

    Hypothesizes a conditional logit model of decision making to explain collegiate major selection. Results suggest a link between student environment and preference structure and preference structures and student major selection. Suggests findings are limited by use of a largely commuter student population. (KMF)

  14. Augmented Self-Modeling as an Intervention for Selective Mutism

    ERIC Educational Resources Information Center

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  15. A Working Model of Natural Selection Illustrated by Table Tennis

    ERIC Educational Resources Information Center

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  16. Determinants of wood thrush nest success: A multi-scale, model selection approach

    USGS Publications Warehouse

    Driscoll, M.J.L.; Donovan, T.; Mickey, R.; Howard, A.; Fleming, K.K.

    2005-01-01

    We collected data on 212 wood thrush (Hylocichla mustelina) nests in central New York from 1998 to 2000 to determine the factors that most strongly influence nest success. We used an information-theoretic approach to assess and rank 9 models that examined the relationship between nest success (i.e., the probability that a nest would successfully fledge at least 1 wood thrush offspring) and habitat conditions at different spatial scales. We found that 4 variables were significant predictors of nesting success for wood thrushes: (1) total core habitat within 5 km of a study site, (2) distance to forest-field edge, (3) total forest cover within 5 km of the study site, and (4) density and variation in diameter of trees and shrubs surrounding the nest. The coefficients of these predictors were all positive. Of the 9 models evaluated, amount of core habitat in the 5-km landscape was the best-fit model, but the vegetation structure model (i.e., the density of trees and stems surrounding a nest) was also supported by the data. Based on AIC weights, enhancement of core area is likely to be a more effective management option than any other habitat-management options explored in this study. Bootstrap analysis generally confirmed these results; core and vegetation structure models were ranked 1, 2, or 3 in over 50% of 1,000 bootstrap trials. However, bootstrap results did not point to a decisive model, which suggests that multiple habitat factors are influencing wood thrush nesting success. Due to model uncertainty, we used a model averaging approach to predict the success or failure of each nest in our dataset. This averaged model was able to correctly predict 61.1% of nest outcomes.
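
    The model-averaging step can be written compactly: AIC weights for each candidate logistic model multiply its predicted probabilities. The Python sketch below uses invented covariates standing in for the habitat variables.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 212
        core, stems = rng.normal(size=n), rng.normal(size=n)
        success = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * core + 0.4 * stems))))

        designs = {"core": [core], "stems": [stems], "core+stems": [core, stems]}
        fits = {k: sm.Logit(success, sm.add_constant(np.column_stack(v))).fit(disp=0)
                for k, v in designs.items()}

        aic = np.array([f.aic for f in fits.values()])
        w = np.exp(-0.5 * (aic - aic.min())); w /= w.sum()   # Akaike weights
        preds = np.column_stack([f.predict() for f in fits.values()])
        averaged = preds @ w                                 # model-averaged p(success)
        print(dict(zip(designs, w.round(3))))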

  17. Ecohydrological model parameter selection for stream health evaluation.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  18. Forecasting Tuberculosis Incidence in Iran Using Box-Jenkins Models

    PubMed Central

    Moosazadeh, Mahmood; Nasehi, Mahshid; Bahrampour, Abbas; Khanjani, Narges; Sharafi, Saeed; Ahmadi, Shanaz

    2014-01-01

    Background: Predicting the incidence of tuberculosis (TB) plays an important role in planning health control strategies for the future, developing intervention programs and allocating resources. Objectives: The present longitudinal study estimated the incidence of tuberculosis in 2014 using Box-Jenkins methods. Materials and Methods: Monthly data of tuberculosis cases recorded in the surveillance system of the Iran tuberculosis control program from 2005 to 2011 were used. Data were reviewed regarding normality, variance equality and stationarity conditions. The parameters p, d and q and P, D and Q were determined, and different models were examined. Based on the lowest levels of AIC and BIC, the most suitable model was selected among the models whose overall adequacy was confirmed. Results: During 84 months, 63568 TB patients were recorded, an average of 756.8 (SD = 11.9) TB cases a month. SARIMA(0,1,1)(0,1,1)_12, with the lowest AIC (12.78), was selected as the most adequate model for prediction. It was predicted that total nationwide TB cases for 2014 would be about 16.75 per 100,000 people. Conclusions: Given the cyclic pattern of recorded TB cases, Box-Jenkins and SARIMA models are suitable for predicting its prevalence in the future. Moreover, prediction results show an increasing trend of TB cases in Iran. PMID:25031852
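
    In outline, the order search looks like the following Python sketch (simulated monthly counts; statsmodels' SARIMAX stands in for the Box-Jenkins fitting engine, and only a few candidate orders are tried):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        t = np.arange(84)                                   # 84 months, as above
        cases = 750 + 50 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 20, 84)

        best = None
        for order in [(0, 1, 1), (1, 1, 0), (1, 1, 1)]:
            for sorder in [(0, 1, 1, 12), (1, 1, 0, 12)]:
                fit = sm.tsa.SARIMAX(cases, order=order,
                                     seasonal_order=sorder).fit(disp=False)
                if best is None or fit.aic < best[0]:
                    best = (fit.aic, order, sorder, fit)

        aic, order, sorder, fit = best
        print("selected SARIMA", order, "x", sorder, "AIC = %.1f" % aic)
        forecast = fit.forecast(steps=12)                   # next year's predictions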

  19. Development, Selection, and Validation of Tumor Growth Models

    NASA Astrophysics Data System (ADS)

    Shahmoradi, Amir; Lima, Ernesto; Oden, J. Tinsley

    In recent years, a multitude of different mathematical approaches have been taken to develop multiscale models of solid tumor growth. Prime successful examples include the lattice-based, agent-based (off-lattice), and phase-field approaches, or a hybrid of these models applied to multiple scales of tumor, from subcellular to tissue level. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes our attempt at developing lattice-based, agent-based and phase-field models of tumor growth and assessing their predictive power through new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), which brings together model calibration, determination of sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection.

  20. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  1. A guide to Bayesian model selection for ecologists

    USGS Publications Warehouse

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  2. Selected aspects of modelling monetary transmission mechanism by BVAR model

    NASA Astrophysics Data System (ADS)

    Vaněk, Tomáš; Dobešová, Anna; Hampel, David

    2013-10-01

    In this paper we use the BVAR model with a specifically defined prior to evaluate data with high-lag dependencies. The results are compared to both a restricted and a common VAR model. The data describe the monetary transmission mechanism in the Czech Republic and Slovakia from January 2002 to February 2013. The results point to the inadequacy of the common VAR model. The restricted VAR model and the BVAR model appear to be similar in the sense of impulse responses.
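
    BVAR estimation with custom priors is not part of statsmodels, but the common-VAR baseline that the paper compares against can be sketched there, including lag selection by information criteria; `macro_df` is a hypothetical DataFrame of the monthly series:

```python
from statsmodels.tsa.api import VAR

def fit_common_var(macro_df, max_lags=12):
    """Fit an unrestricted VAR, choosing the lag order by AIC."""
    model = VAR(macro_df)
    order = model.select_order(maxlags=max_lags)   # AIC/BIC/HQIC/FPE table
    result = model.fit(maxlags=max_lags, ic="aic")
    return order.selected_orders, result

# usage (hypothetical): orders, res = fit_common_var(macro_df)
# res.irf(24).plot() would then show the impulse responses compared above.
```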

  3. Multicriteria framework for selecting a process modelling language

    NASA Astrophysics Data System (ADS)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM), since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach for selecting the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but rather demonstrates how two existing approaches can be combined to solve the problem of selecting a modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  4. Monthly streamflow prediction in the Volta Basin of West Africa: A SISO NARMAX polynomial modelling

    NASA Astrophysics Data System (ADS)

    Amisigo, B. A.; van de Giesen, N.; Rogers, C.; Andah, W. E. I.; Friesen, J.

    Single-input-single-output (SISO) non-linear system identification techniques were employed to model monthly catchment runoff at selected gauging sites in the Volta Basin of West Africa. NARMAX (Non-linear Autoregressive Moving Average with eXogenous Input) polynomial models were fitted to basin monthly rainfall and gauging station runoff data for each of the selected sites and used to predict monthly runoff at the sites. An error reduction ratio (ERR) algorithm was used to order regressors for various combinations of input, output and noise lags (various model structures), and the significant regressors for each model were selected by applying an Akaike Information Criterion (AIC) to independent rainfall-runoff validation series. Model parameters were estimated with the Matlab REGRESS function (an orthogonal least squares method). In each case, the sub-model without noise terms was fitted first, followed by a fitting of the noise model. The coefficient of determination (R-squared), the Nash-Sutcliffe Efficiency criterion (NSE) and the F statistic for the estimation (training) series were used to evaluate the significance of fit of each model to this series, while model selection from the range of models fitted for each gauging site was done by examining the NSEs and the AICs of the validation series. Monthly runoff predictions from the selected models were very good, and the polynomial models appeared to have captured a good part of the rainfall-runoff non-linearity. The results indicate that the NARMAX modelling framework is suitable for monthly river runoff prediction in the Volta Basin. The several good models made available by the NARMAX modelling framework could be useful in the selection of model structures that also provide insights into the physical behaviour of the catchment rainfall-runoff system.
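
    A simplified stand-in for the structure search described above: build polynomial NARX-style regressors from lagged rainfall and runoff, fit each candidate structure by least squares, and score it with a Gaussian-likelihood AIC. The data and the candidate set are synthetic, not the Volta Basin series:

```python
import numpy as np

def build_regressors(rain, flow, lags):
    """Columns: constant, lagged rainfall, lagged runoff, squared rainfall."""
    n, start = len(flow), max(lags)
    cols = [np.ones(n - start)]
    for k in lags:
        cols += [rain[start - k:n - k], flow[start - k:n - k],
                 rain[start - k:n - k] ** 2]
    return np.column_stack(cols), flow[start:]

def aic(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 10.0, 240)                        # monthly rainfall
flow = np.convolve(rain, [0.3, 0.2, 0.1])[:240] + rng.normal(0, 2, 240)

scores = {lags: aic(*build_regressors(rain, flow, lags))
          for lags in [(1,), (1, 2), (1, 2, 3)]}
print(min(scores, key=scores.get), scores)
```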

  5. Modeling Selection and Extinction Mechanisms of Biological Systems

    NASA Astrophysics Data System (ADS)

    Amirjanov, Adil

    In this paper, the behavior of a genetic algorithm is modeled to enhance its applicability as a modeling tool for biological systems. A new description model for the selection mechanism is introduced, which operates on a portion of the individuals of the population. The extinction and recolonization mechanism is modeled, and solving the dynamics analytically shows that genetic drift in a population with extinction/recolonization is doubled. A mathematical analysis of the interaction between the selection and extinction/recolonization processes is carried out to assess the dynamics of the macroscopic statistical properties of the population. Computer simulations confirm that the theoretical predictions of the described models are good approximations. A mathematical model of GA dynamics describing anti-predator vigilance in an animal group was also examined against a known analytical solution of the problem, showing good agreement in identifying the evolutionarily stable strategies.

  6. Modeling quality attributes and metrics for web service selection

    NASA Astrophysics Data System (ADS)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since service-oriented architecture (SOA) is designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), by using mathematical optimization, has become a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and selecting a service from a set of alternatives that differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.

  7. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's competitiveness in the global marketplace. Improper selection and evaluation of potential vendors can diminish an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process to support better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on Structural Equation Modeling (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model was designed after a thorough study of the literature. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.
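
    The AHP half of the proposed hybrid can be sketched as a principal-eigenvector computation on a pairwise-comparison matrix; the criteria and judgment values below are invented, and the SEM half is omitted:

```python
import numpy as np

# Pairwise comparisons of three hypothetical vendor criteria
# (cost, quality, delivery) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalized criterion weights

# Consistency ratio; judgments are usually accepted when CR < 0.1.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58                             # Saaty's random index for n = 3
print(weights.round(3), round(cr, 3))
```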

  8. Dynamic selection of models for a ventilator-management advisor.

    PubMed Central

    Rutledge, G. W.

    1993-01-01

    A ventilator-management advisor (VMA) is a computer program that monitors patients who are treated with a mechanical ventilator. A VMA implements a patient-specific physiologic model to interpret patient data and to predict the effects of alternative control settings for the ventilator. Because a VMA evaluates its physiologic model repeatedly during each cycle of data interpretation, highly complex models may require more computation time than is available in this time-critical application. On the other hand, less complex models may be inaccurate if they are unable to represent a patient's physiologic abnormalities. For each patient, a VMA should select a model that balances the tradeoff of prediction accuracy and computation-time complexity. I present a method to select models that are at an appropriate level of detail for time-constrained decision tasks. The method is based on a local search in a graph of models (GoM) for a model that maximizes the tradeoff of computation-time complexity and prediction accuracy. For each model under consideration, a belief network computes a probability of model adequacy given the qualitative prior information, and the goodness of fit of the model to the data provides a measure of the conditional probability of adequacy given the quantitative observations. I apply this method to the problem of model selection for a VMA. I describe an implementation of a graph of physiologic models that range in complexity from VentPlan, a simple model with 3 compartments, to VentSim, a multicompartment model with detailed airway, circulation and mechanical ventilator components.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:8130492

  9. Model Selection in Historical Research Using Approximate Bayesian Computation

    PubMed Central

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester's laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes Factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
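
    A stripped-down sketch of ABC model selection of the kind described: rejection sampling under two toy models, with the Bayes factor approximated by the ratio of acceptance counts under equal model priors. The models, prior, tolerance, and observed statistic are stand-ins, not the paper's combat equations:

```python
import numpy as np

rng = np.random.default_rng(1)
observed, eps = 4.2, 0.2          # observed summary statistic and tolerance

def accepted_draws(model, n_draws=50_000):
    count = 0
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 10.0)        # shared flat prior
        if model == "linear":
            stat = 0.5 * theta + rng.normal(0.0, 0.5)
        else:                                 # a "squared" variant
            stat = theta ** 2 / 10.0 + rng.normal(0.0, 0.5)
        count += abs(stat - observed) < eps
    return count

acc = {m: accepted_draws(m) for m in ("linear", "squared")}
bayes_factor = acc["linear"] / max(acc["squared"], 1)
print(acc, bayes_factor)
```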

  10. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample conformed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty on this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating

  11. Bayesian Nonlinear Model Selection for Gene Regulatory Networks

    PubMed Central

    Ni, Yang; Stingo, Francesco C.; Baladandayuthapani, Veerabhadran

    2015-01-01

    Summary Gene regulatory networks represent the regulatory relationships between genes and their products and are important for exploring and defining the underlying biological processes of cellular systems. We develop a novel framework to recover the structure of nonlinear gene regulatory networks using semiparametric spline-based directed acyclic graphical models. Our use of splines allows the model to have both flexibility in capturing nonlinear dependencies as well as control of overfitting via shrinkage, using mixed model representations of penalized splines. We propose a novel discrete mixture prior on the smoothing parameter of the splines that allows for simultaneous selection of both linear and nonlinear functional relationships as well as inducing sparsity in the edge selection. Using simulation studies, we demonstrate the superior performance of our methods in comparison with several existing approaches in terms of network reconstruction and functional selection. We apply our methods to a gene expression dataset in glioblastoma multiforme, which reveals several interesting and biologically relevant nonlinear relationships. PMID:25854759

  12. Empirical extensions of the lasso penalty to reduce the false discovery rate in high-dimensional Cox regression models.

    PubMed

    Ternès, Nils; Rotolo, Federico; Michiels, Stefan

    2016-07-10

    Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26970107
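
    The contrast between max-cvl and an information-criterion-penalized choice of λ can be sketched with scikit-learn, using a plain linear model as a stand-in for the Cox setting (scikit-learn has no built-in Cox lasso); the data are simulated:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsIC

X, y = make_regression(n_samples=200, n_features=100, n_informative=5,
                       noise=5.0, random_state=0)

cv_fit = LassoCV(cv=10).fit(X, y)                   # max-cvl analogue
bic_fit = LassoLarsIC(criterion="bic").fit(X, y)    # penalized, more conservative

print("CV  selects", np.sum(cv_fit.coef_ != 0), "features")
print("BIC selects", np.sum(bic_fit.coef_ != 0), "features")
```

    The cross-validated choice typically retains many noise features (high FDR), while the BIC-penalized choice retains far fewer, which is the behavior the abstract's extension targets.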

  13. Uncertain programming models for portfolio selection with uncertain returns

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.

  14. The E-MS Algorithm: Model Selection with Incomplete Data

    PubMed Central

    Jiang, Jiming; Nguyen, Thuan; Rao, J. Sunil

    2014-01-01

    We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains. PMID:26783375

  15. Fixation probability in a two-locus intersexual selection model.

    PubMed

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. PMID:27059474

  16. Variable selection in strong hierarchical semiparametric models for longitudinal data

    PubMed Central

    Zeng, Xianbin; Ma, Shuangge; Qin, Yichen; Li, Yang

    2015-01-01

    In this paper, we consider the variable selection problem in semiparametric additive partially linear models for longitudinal data. Our goal is to identify relevant main effects and corresponding interactions associated with the response variable. Meanwhile, we enforce the strong hierarchical restriction on the model, that is, an interaction can be included in the model only if both the associated main effects are included. Based on B-splines basis approximation for the nonparametric components, we propose an iterative estimation procedure for the model by penalizing the likelihood with a partial group minimax concave penalty (MCP), and use BIC to select the tuning parameter. To further improve the estimation efficiency, we specify the working covariance matrix by maximum likelihood estimation. Simulation studies indicate that the proposed method tends to consistently select the true model and works efficiently in estimation and prediction with finite samples, especially when the true model obeys the strong hierarchy. Finally, the China Stock Market data are fitted with the proposed model to illustrate its effectiveness. PMID:27076867

  17. A model of selective masking in chromatic detection.

    PubMed

    Shepard, Timothy G; Swanson, Emily A; McCarthy, Comfrey L; Eskew, Rhea T

    2016-07-01

    Narrowly tuned, selective noise masking of chromatic detection has been taken as evidence for the existence of a large number of color mechanisms (i.e., higher-order color mechanisms). Here we replicate earlier observations of selective masking of tests in the (L,M) plane of cone space when the noise is placed near the corners of the detection contour. We used unipolar Gaussian blob tests with three different noise color directions, and we show that there are substantial asymmetries in the detection contours, asymmetries that would have been missed with bipolar tests such as Gabor patches. We develop a new chromatic detection model, which is based on probability summation of linear cone combinations, and incorporates a linear contrast energy versus noise power relationship that predicts how the sensitivity of these mechanisms changes with noise contrast and chromaticity. With only six unipolar color mechanisms (the same number as the cardinal model), the new model accounts for the threshold contours across the different noise conditions, including the asymmetries and the selective effects of the noises. The key to producing selective noise masking in the (L,M) plane is having more than two mechanisms with opposed L- and M-cone inputs, in which case selective masking can be produced without large numbers of color mechanisms. PMID:27442723

  18. Model systems, taxonomic bias, and sexual selection: beyond Drosophila.

    PubMed

    Zuk, Marlene; Garcia-Gonzalez, Francisco; Herberstein, Marie Elisabeth; Simmons, Leigh W

    2014-01-01

    Although model systems are useful in entomology, allowing generalizations based on a few well-known species, they also have drawbacks. It can be difficult to know how far to generalize from information in a few species: Are all flies like Drosophila? The use of model systems is particularly problematic in studying sexual selection, where variability among taxa is key to the evolution of different behaviors. A bias toward the use of a few insect species, particularly from the genus Drosophila, is evident in the sexual selection and sexual conflict literature over the past several decades, although the diversity of study organisms has increased more recently. As the number of model systems used to study sexual conflict increased, support for the idea that sexual interactions resulted in harm to females decreased. Future work should choose model systems thoughtfully, combining well-known species with those that can add to the variation that allows us to make more meaningful generalizations. PMID:24160422

  19. Model-based sensor location selection for helicopter gearbox monitoring

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Wang, Keming; Danai, Kourosh; Lewicki, David G.

    1996-01-01

    A new methodology is introduced to quantify the significance of accelerometer locations for fault diagnosis of helicopter gearboxes. The basis for this methodology is an influence model which represents the effect of various component faults on accelerometer readings. Based on this model, a set of selection indices are defined to characterize the diagnosability of each component, the coverage of each accelerometer, and the relative redundancy between the accelerometers. The effectiveness of these indices is evaluated experimentally by measurement-fault data obtained from an OH-58A main rotor gearbox. These data are used to obtain a ranking of individual accelerometers according to their significance in diagnosis. Comparison between the experimentally obtained rankings and those obtained from the selection indices indicates that the proposed methodology offers a systematic means for accelerometer location selection.

  20. Towards a Personalized Task Selection Model with Shared Instructional Control

    ERIC Educational Resources Information Center

    Corbalan, Gemma; Kester, Liesbeth; Van Merrienboer, Jeroen J. G.

    2006-01-01

    Modern education emphasizes the need to flexibly personalize learning tasks to individual learners. This article discusses a personalized task-selection model with shared instructional control based on two current tendencies for the dynamic sequencing of learning tasks: (1) personalization by an instructional agent which makes sequencing decisions…

  1. Measures and limits of models of fixation selection.

    PubMed

    Wilming, Niklas; Betz, Torsten; Kietzmann, Tim C; König, Peter

    2011-01-01

    Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes comparison difficult. We make three main contributions to this line of research. First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure between probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound for these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame for judging the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions that surpass that upper bound. Taken together, these findings lay out the information required for a well-founded judgment of the quality of any model of fixation selection, and should therefore be reported when a new model is introduced. PMID:21931638
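
    Both recommended measures are straightforward to compute. A short sketch, assuming `saliency` is a model's prediction map and `fix_r`, `fix_c` are fixated pixel coordinates (all synthetic here); a simple additive regularizer stands in for the paper's formal small-sample correction:

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
saliency = rng.random((60, 80))                 # model prediction map
fix_r = rng.integers(0, 60, 50)                 # fixated rows
fix_c = rng.integers(0, 80, 50)                 # fixated columns

# AUC: how well the map's values separate fixated pixels from the rest.
labels = np.zeros(saliency.size, dtype=int)
labels[np.ravel_multi_index((fix_r, fix_c), saliency.shape)] = 1
auc = roc_auc_score(labels, saliency.ravel())

# KL-divergence between the empirical fixation distribution and the map.
counts = np.zeros(saliency.shape)
np.add.at(counts, (fix_r, fix_c), 1)
p = (counts.ravel() + 1e-9) / (counts.sum() + 1e-9 * counts.size)
q = saliency.ravel() / saliency.sum()
print(auc, entropy(p, q))                       # entropy(p, q) = KL(p || q)
```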

  2. Selecting best-fit models for estimating the body mass from 3D data of the human calcaneus.

    PubMed

    Jung, Go-Un; Lee, U-Young; Kim, Dong-Ho; Kwak, Dai-Soon; Ahn, Yong-Woo; Han, Seung-Ho; Kim, Yi-Suk

    2016-05-01

    Body mass (BM) estimation can facilitate the interpretation of skeletal materials in terms of an individual's body size and physique in forensic anthropology. However, few metric studies have tried to estimate BM by focusing on the prominent biomechanical properties of the calcaneus. The purpose of this study was to prepare best-fit models for estimating BM from the 3D human calcaneus by two major linear regression approaches (the heuristic statistical and all-possible-regressions techniques) and to validate the models through predicted residual sum of squares (PRESS) statistics. A metric analysis was conducted based on 70 human calcaneus samples (29 males and 41 females) taken from 3D models in the Digital Korean Database, and 10 variables were measured for each sample. Three best-fit models were postulated using F-statistics, Mallows' Cp, and the Akaike information criterion (AIC) and Bayes information criterion (BIC) over the available candidate models. Finally, the most accurate regression model yielded the lowest %SEE and an R(2) of 0.843. The application of leave-one-out cross-validation indicated a high level of validation accuracy. This study also confirms that the equations for estimating BM using 3D models of the human calcaneus will be helpful for establishing identification in forensic cases with consistent reliability. PMID:26970867
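
    A condensed sketch of the all-possible-regressions search with AIC/BIC scoring and a PRESS check, using statsmodels; `X` (calcaneus measurements) and `y` (body mass) are synthetic stand-ins for the study's data:

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 5))                             # candidate predictors
y = X @ np.array([2.0, 0.0, 1.5, 0.0, 0.0]) + rng.normal(0.0, 1.0, 70)

def press(design, y):
    """Leave-one-out predicted residual sum of squares via the hat matrix."""
    resid = sm.OLS(y, design).fit().resid
    h = np.diag(design @ np.linalg.pinv(design.T @ design) @ design.T)
    return np.sum((resid / (1.0 - h)) ** 2)

best = None
for k in range(1, 6):
    for combo in itertools.combinations(range(5), k):
        design = sm.add_constant(X[:, combo])
        fit = sm.OLS(y, design).fit()
        if best is None or fit.aic < best[1]:
            best = (combo, fit.aic, fit.bic, press(design, y))
print("best subset (by AIC), AIC, BIC, PRESS:", best)
```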

  3. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988
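
    The cross-validation alternative recommended above can be illustrated with probabilistic PCA's held-out log-likelihood in scikit-learn, here choosing the number of components on synthetic data as a simplified stand-in for the mixed ICA/PCA setting:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
sources = rng.normal(size=(500, 3))                  # 3 true latent sources
X = sources @ rng.normal(size=(3, 10)) + 0.5 * rng.normal(size=(500, 10))

# PCA.score returns the average probabilistic-PCA log-likelihood, so
# cross_val_score yields a held-out likelihood per candidate dimension.
cv_ll = {n: cross_val_score(PCA(n_components=n), X, cv=5).mean()
         for n in range(1, 8)}
print("components chosen by cross-validation:", max(cv_ll, key=cv_ll.get))
```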

  4. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDD). This criterion can be used to find the number of components and the σ² parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection MDD, were tested on toy examples and real-life applications, and it is shown that they outperform other known algorithms. PMID:25608316

  5. Model selection as a science driver for dark energy surveys

    NASA Astrophysics Data System (ADS)

    Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin

    2006-07-01

    A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
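
    A toy numerical sketch of Bayes factor forecasting for the problem described: compare a model that fixes the equation of state at w = -1 against one with a free w under a flat prior, given mock measurements. The noise level, prior range, and "true" w are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, w_true = 0.1, -0.9
data = rng.normal(w_true, sigma, size=20)          # mock w estimates

def log_like(w):
    return np.sum(-0.5 * ((data - w) / sigma) ** 2
                  - 0.5 * np.log(2.0 * np.pi * sigma ** 2))

log_ev_m0 = log_like(-1.0)                         # M0: w = -1 exactly

grid = np.linspace(-2.0, 0.0, 2001)                # M1: flat prior on [-2, 0]
like = np.exp(np.array([log_like(w) for w in grid]))
dw = grid[1] - grid[0]
log_ev_m1 = np.log(np.sum(like) * dw * 0.5)        # prior density = 1/2

print("ln Bayes factor, evolving over fixed:", log_ev_m1 - log_ev_m0)
```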

  6. Modeling Selective Availability of the NAVSTAR Global Positioning System

    NASA Technical Reports Server (NTRS)

    Braasch, Michael

    1990-01-01

    As the development of the NAVSTAR Global Positioning System (GPS) continues, there will increasingly be a need for a software-centered signal model. This model must accurately generate the observed pseudorange that would typically be encountered. The observed pseudorange varies from the true geometric (slant) range due to range measurement errors. Errors in range measurement stem from a variety of hardware and environmental factors. These errors are classified as either deterministic or random and, where appropriate, their models are summarized. Of particular interest is the model for Selective Availability, which is derived from actual GPS data. The procedure for determining this model, based on system identification theory, is briefly outlined. The synthesis of these error sources into the final signal model is given along with simulation results.

  7. Input Variable Selection for Hydrologic Modeling Using Anns

    NASA Astrophysics Data System (ADS)

    Ganti, R.; Jain, A.

    2011-12-01

    The use of artificial neural network (ANN) models in water resources applications has grown considerably over the last couple of decades. In learning problems, where a connectionist network is trained with a finite-sized training set, better generalization performance is often obtained when unneeded weights in the network are eliminated. One source of unneeded weights is the inclusion of input variables that provide little information about the output variables. Hence, one approach in the ANN modeling methodology that has received little attention is the selection of appropriate model inputs. In the past, different methods have been used for identifying and eliminating these input variables. Normally, the linear methods of the Auto Correlation Function (ACF) and Partial Auto Correlation Function (PACF) have been adopted. For nonlinear physical systems, e.g. hydrological systems, model inputs selected based on linear correlation analysis among input and output variables cannot be assured to capture the non-linearity in the system. In the present study, two non-linear methods are explored for Input Variable Selection (IVS). The linear method employing the ACF and PACF is also used for comparison purposes. The first non-linear method utilizes a measure of the Mutual Information Criterion (MIC) to characterize the dependence between a potential model input and the output, in a step-wise input selection procedure. The second non-linear method is an improvement over the first that eliminates redundant inputs based on a partial measure of the mutual information criterion (PMIC), also in a step-wise procedure. Further, the number of input variables to be considered for the development of the ANN model was determined using Principal Component Analysis (PCA), which previously used to be done by a trial-and-error approach. The daily river flow data derived from Godavari River Basin @ Polavaram, Andhra Pradesh, India, and the daily average
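
    A minimal sketch of the mutual-information step described above, using scikit-learn to rank lagged flows as candidate ANN inputs; the flow series is synthetic, not the Godavari data:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
flow = np.convolve(rng.gamma(2.0, 50.0, 1000),
                   np.ones(5) / 5, mode="same")    # autocorrelated daily flow

max_lag = 10
X = np.column_stack([flow[max_lag - k:-k] for k in range(1, max_lag + 1)])
y = flow[max_lag:]

mi = mutual_info_regression(X, y, random_state=0)
print("lags ranked by mutual information:", np.argsort(mi)[::-1] + 1)
```

    A step-wise PMIC-style procedure would re-estimate the scores after conditioning on each selected input; the single-pass ranking above is only the first step.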

  8. Discriminative Feature Selection via Multiclass Variable Memory Markov Model

    NASA Astrophysics Data System (ADS)

    Slonim, Noam; Bejerano, Gill; Fine, Shai; Tishby, Naftali

    2003-12-01

    We propose a novel feature selection method based on a variable memory Markov (VMM) model. The VMM was originally proposed as a generative model trying to preserve the original source statistics from training data. We extend this technique to simultaneously handle several sources, and further apply a new criterion to prune out nondiscriminative features out of the model. This results in a multiclass discriminative VMM (DVMM), which is highly efficient, scaling linearly with data size. Moreover, we suggest a natural scheme to sort the remaining features based on their discriminative power with respect to the sources at hand. We demonstrate the utility of our method for text and protein classification tasks.

  9. Supplier Selection in Virtual Enterprise Model of Manufacturing Supply Network

    NASA Astrophysics Data System (ADS)

    Kaihara, Toshiya; Opadiji, Jayeola F.

    The market-based approach to manufacturing supply network planning focuses on the competitive attitudes of various enterprises in the network to generate plans that seek to maximize the throughput of the network. It is this competitive behaviour of the member units that we explore in proposing a solution model for a supplier selection problem in convergent manufacturing supply networks. We present a formulation of autonomous units of the network as trading agents in a virtual enterprise network interacting to deliver value to market consumers and discuss the effect of internal and external trading parameters on the selection of suppliers by enterprise units.

  10. Inference for blocked randomization under a selection bias model.

    PubMed

    Kennes, Lieven N; Rosenberger, William F; Hilgers, Ralf-Dieter

    2015-12-01

    We provide an asymptotic test to analyze randomized clinical trials that may be subject to selection bias. For normally distributed responses, and under permuted block randomization, we derive a likelihood ratio test of the treatment effect under a selection bias model. A likelihood ratio test of the presence of selection bias arises from the same formulation. We prove that the test is asymptotically chi-square on one degree of freedom. These results correlate well with the likelihood ratio test of Ivanova et al. (2005, Statistics in Medicine 24, 1537-1546) for binary responses, for which they established by simulation that the asymptotic distribution is chi-square. Simulations also show that the test is robust to departures from normality and under another randomization procedure. We illustrate the test by reanalyzing a clinical trial on retinal detachment. PMID:26099068

  11. Broken selection rule in the quantum Rabi model

    PubMed Central

    Forn-Díaz, P.; Romero, G.; Harmans, C. J. P. M.; Solano, E.; Mooij, J. E.

    2016-01-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models. PMID:27273346

  12. Broken selection rule in the quantum Rabi model

    NASA Astrophysics Data System (ADS)

    Forn-Díaz, P.; Romero, G.; Harmans, C. J. P. M.; Solano, E.; Mooij, J. E.

    2016-06-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models.

  13. Broken selection rule in the quantum Rabi model.

    PubMed

    Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E

    2016-01-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models. PMID:27273346

  14. Stationary solutions for metapopulation Moran models with mutation and selection.

    PubMed

    Constable, George W A; McKane, Alan J

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation. PMID:25871148
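
    A bare-bones simulation of the single-"island" Moran model with mutation and selection to which the metapopulation model reduces; all parameter values are arbitrary:

```python
import numpy as np

def moran_step(n_a, n_total, rng, s=0.02, mu=0.001):
    """One birth-death event: allele A has selective advantage s, and the
    offspring mutates to the opposite type with probability mu."""
    p_a = n_a * (1 + s) / (n_a * (1 + s) + (n_total - n_a))
    offspring_is_a = rng.random() < p_a
    if rng.random() < mu:
        offspring_is_a = not offspring_is_a
    dying_is_a = rng.random() < n_a / n_total
    return n_a + int(offspring_is_a) - int(dying_is_a)

rng = np.random.default_rng(3)
n_a, n_total, traj = 50, 100, []
for _ in range(20_000):
    traj.append(n_a)
    n_a = moran_step(n_a, n_total, rng)
print("mean frequency of A:", np.mean(traj) / n_total)
```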

  15. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    SciTech Connect

    Blanchard, A.

    1998-12-17

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities.

  16. Selection of Models for Ingestion Pathway and Relocation

    SciTech Connect

    Blanchard, A.; Thompson, J.M.

    1998-11-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways.

  17. Selection of Models for Ingestion Pathway and Relocation

    SciTech Connect

    Blanchard, A.; Thompson, J.M.

    1999-02-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways.

  18. Model-based rational strategy for chromatographic resin selection.

    PubMed

    Nfor, Beckley K; Zuluaga, Diego S; Verheijen, Peter J T; Verhaert, Peter D E M; van der Wielen, Luuk A M; Ottens, Marcel

    2011-01-01

    A model-based rational strategy for the selection of chromatographic resins is presented. The main question being addressed is that of selecting the most suitable chromatographic resin from a few promising alternatives. The methodology starts with chromatographic modeling, parameter acquisition, and model validation, followed by model-based optimization of the chromatographic separation for the resins of interest. Finally, the resins are rationally evaluated based on their optimized operating conditions and performance metrics such as product purity, yield, concentration, throughput, productivity, and cost. Resin evaluation proceeds by two main approaches. In the first approach, Pareto frontiers from multi-objective optimization of conflicting objectives are overlaid for different resins, enabling direct visualization and comparison of resin performances based on the feasible solution space. The second approach involves the transformation of the resin performances into weighted resin scores, enabling the simultaneous consideration of multiple performance metrics and the setting of priorities. The proposed model-based resin selection strategy was illustrated by evaluating three mixed-mode adsorbents (ADH, PPA, and HEA) for the separation of a ternary mixture of bovine serum albumin, ovalbumin, and amyloglucosidase. In order of decreasing weighted resin score or performance, the top three resins for this separation were ADH > PPA > HEA. The proposed model-based approach could be a suitable alternative to column scouting during process development, the main strengths being that minimal experimentation is required and resins are evaluated under their ideal working conditions, enabling a fair comparison. This work also demonstrates the application of column modeling and optimization to mixed-mode chromatography. PMID:22238769

  19. Multiobjective optimization for model selection in kernel methods in regression.

    PubMed

    You, Di; Benitez-Quiroz, Carlos Fabian; Martinez, Aleix M

    2014-10-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-versus-variance tradeoff. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a tradeoff between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition, and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared with methods in the state of the art. PMID:25291740

  20. Multiobjective Optimization for Model Selection in Kernel Methods in Regression

    PubMed Central

    You, Di; Benitez-Quiroz, C. Fabian; Martinez, Aleix M.

    2016-01-01

    Regression plays a major role in many scientific and engineering problems. The goal of regression is to learn the unknown underlying function from a set of sample vectors with known outcomes. In recent years, kernel methods in regression have facilitated the estimation of nonlinear functions. However, two major (interconnected) problems remain open. The first problem is given by the bias-vs-variance trade-off. If the model used to estimate the underlying function is too flexible (i.e., high model complexity), the variance will be very large. If the model is fixed (i.e., low complexity), the bias will be large. The second problem is to define an approach for selecting the appropriate parameters of the kernel function. To address these two problems, this paper derives a new smoothing kernel criterion, which measures the roughness of the estimated function as a measure of model complexity. Then, we use multiobjective optimization to derive a criterion for selecting the parameters of that kernel. The goal of this criterion is to find a trade-off between the bias and the variance of the learned function. That is, the goal is to increase the model fit while keeping the model complexity in check. We provide extensive experimental evaluations using a variety of problems in machine learning, pattern recognition and computer vision. The results demonstrate that the proposed approach yields smaller estimation errors as compared to methods in the state of the art. PMID:25291740

  1. Modeling selective pressures on phytoplankton in the global ocean.

    PubMed

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-01-01

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and

  2. The Coalescent Process in Models with Selection and Recombination

    PubMed Central

    Hudson, R. R.; Kaplan, N. L.

    1988-01-01

    The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh does not match the predictions of this simple model very well. PMID:3147214

  3. Space-Time Areal Mixture Model: Relabeling Algorithm and Model Selection Issues.

    PubMed

    Hossain, M M; Lawson, A B; Cai, B; Choi, J; Liu, J; Kirby, R S

    2014-03-01

    With the growing popularity of spatial mixture models in cluster analysis, model selection criteria have become an established tool in the search for parsimony. However, the label-switching problem is often inherent in Bayesian implementation of mixture models and a variety of relabeling algorithms have been proposed. We use a space-time mixture of Poisson regression models with homogeneous covariate effects to illustrate that the best model selected by using model selection criteria does not always support the model that is chosen by the optimal relabeling algorithm. The results are illustrated for real and simulated datasets. The objective is to make the reader aware that if the purpose of statistical modeling is to identify clusters, applying a relabeling algorithm to the model with the best fit may not generate the optimal relabeling. PMID:25221430

  4. Space-Time Areal Mixture Model: Relabeling Algorithm and Model Selection Issues

    PubMed Central

    Hossain, M.M.; Lawson, A.B.; Cai, B.; Choi, J.; Liu, J.; Kirby, R. S.

    2014-01-01

    With the growing popularity of spatial mixture models in cluster analysis, model selection criteria have become an established tool in the search for parsimony. However, the label-switching problem is often inherent in Bayesian implementation of mixture models and a variety of relabeling algorithms have been proposed. We use a space-time mixture of Poisson regression models with homogeneous covariate effects to illustrate that the best model selected by using model selection criteria does not always support the model that is chosen by the optimal relabeling algorithm. The results are illustrated for real and simulated datasets. The objective is to make the reader aware that if the purpose of statistical modeling is to identify clusters, applying a relabeling algorithm to the model with the best fit may not generate the optimal relabeling. PMID:25221430

  5. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…

  6. Modeling selective attention using a neuromorphic analog VLSI device.

    PubMed

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features. PMID:11112258

  7. Modeling Selective Elimination of Quiescent Cancer Cells from Bone Marrow

    PubMed Central

    Cavnar, Stephen P.; Rickelmann, Andrew D.; Meguiar, Kaille F.; Xiao, Annie; Dosch, Joseph; Leung, Brendan M.; Cai Lesher-Perez, Sasha; Chitta, Shashank; Luker, Kathryn E.; Takayama, Shuichi; Luker, Gary D.

    2015-01-01

    Patients with many types of malignancy commonly harbor quiescent disseminated tumor cells in bone marrow. These cells frequently resist chemotherapy and may persist for years before proliferating as recurrent metastases. To test for compounds that eliminate quiescent cancer cells, we established a new 384-well 3D spheroid model in which small numbers of cancer cells reversibly arrest in G1/G0 phase of the cell cycle when cultured with bone marrow stromal cells. Using dual-color bioluminescence imaging to selectively quantify viability of cancer and stromal cells in the same spheroid, we identified single compounds and combination treatments that preferentially eliminated quiescent breast cancer cells but not stromal cells. A treatment combination effective against malignant cells in spheroids also eliminated breast cancer cells from bone marrow in a mouse xenograft model. This research establishes a novel screening platform for therapies that selectively target quiescent tumor cells, facilitating identification of new drugs to prevent recurrent cancer. PMID:26408255

  8. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
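
    The filtering step can be illustrated with a generic joint state/parameter extended Kalman filter. The Python sketch below uses a toy decay model with an unknown rate constant appended to the state, not the authors' heat-shock or gene-regulation models; all constants are illustrative.

```python
import numpy as np

# Toy joint state/parameter estimation with an extended Kalman filter:
# dynamics x_{t+1} = x_t - k*x_t*dt, with the unknown rate k appended to the state.
dt, T = 0.1, 200
k_true = 0.5
rng = np.random.default_rng(1)

# Simulate noisy measurements of x.
x, ys = 5.0, []
for _ in range(T):
    x = x - k_true * x * dt
    ys.append(x + rng.normal(0, 0.05))

# EKF over the augmented state z = [x, k].
z = np.array([4.0, 0.1])        # initial guess (k deliberately wrong)
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])       # small process noise keeps k adaptable
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])      # we observe x only

for y in ys:
    # Predict: f(z) = [x - k*x*dt, k]; F is its Jacobian.
    xp = z[0] - z[1] * z[0] * dt
    F = np.array([[1 - z[1] * dt, -z[0] * dt],
                  [0.0, 1.0]])
    z = np.array([xp, z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated k = {z[1]:.3f} (true {k_true})")
```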

  9. Model validation and selection based on inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, Thomas; Carvajal González, Sergio; Hanss, Michael

    2012-10-01

    In this work, a method for the validation of models in general, and the selection of the most appropriate model in particular, is presented. As an industrially relevant example, a Finite Element (FE) model of a brake pad is investigated and identified with particular respect to uncertainties. The identification is based on inverse fuzzy arithmetic and consists of two stages. In the first stage, the eigenfrequencies of the brake pad are considered, and for three different material models, a set of fuzzy-valued parameters is identified on the basis of measurement values. Based on these identified parameters and a resimulation of the system with these parameters, a model validation is performed which takes into account both the model uncertainties and the output uncertainties. In the second stage, the most appropriate material model is used in the FE model for the computation of frequency response functions between excitation point and three measurement points. Again, the parameters of the model are identified on the basis of three corresponding measurement signals and a resimulation is conducted.

  10. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  11. Bayesian model selection applied to artificial neural networks used for water resources modeling

    NASA Astrophysics Data System (ADS)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.

  12. The hierarchical sparse selection model of visual crowding

    PubMed Central

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable – destroyed due to over-integration in early stage visual processing – recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the “gist” of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding—the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed. PMID:25309360

  13. Selecting Meteorological Input for the Global Modeling Initiative Assessments

    NASA Technical Reports Server (NTRS)

    Strahan, Susan; Douglass, Anne; Prather, Michael; Coy, Larry; Hall, Tim; Rasch, Phil; Sparling, Lynn

    1999-01-01

    The Global Modeling Initiative (GMI) science team has developed a three dimensional chemistry and transport model (CTM) to evaluate the impact of the exhaust of supersonic aircraft on the stratosphere. An important goal of the GMI is to test modules for numerical transport, photochemical integration, and model dynamics within a common framework. This work is focused on the dependence of the overall assessment on the wind and temperature fields used by the CTM. Three meteorological data sets for the stratosphere were available to GMI: the National Center for Atmospheric Research Community Climate Model (CCM2), the Goddard Earth Observing System Data Assimilation System (GEOS-DAS), and the Goddard Institute for Space Studies general circulation model (GISS-2'). Objective criteria were established by the GMI team to evaluate which of these three data sets provided the best representation of trace gases in the stratosphere today. Tracer experiments were devised to test various aspects of model transport. Stratospheric measurements of long-lived trace gases were selected as a test of the CTM transport. This presentation describes the criteria used in grading the meteorological fields and the resulting choice of wind fields to be used in the GMI assessment. This type of objective model evaluation will lead to a higher level of confidence in these assessments. We suggest that the diagnostic tests shown here be used to augment traditional general circulation model evaluation methods.

  14. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    PubMed

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments, with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/. PMID:25209223

  15. Model selection and inference for censored lifetime medical expenditures.

    PubMed

    Johnson, Brent A; Long, Qi; Huang, Yijian; Chansky, Kari; Redman, Mary

    2016-09-01

    Identifying factors associated with increased medical cost is important for many micro- and macro-institutions, including the national economy and public health, insurers, and the insured. However, assembling comprehensive national databases that include both the cost and individual-level predictors can prove challenging. Alternatively, one can use data from smaller studies with the understanding that conclusions drawn from such analyses may be limited to the participant population. At the same time, smaller clinical studies have limited follow-up and lifetime medical cost may not be fully observed for all study participants. In this context, we develop new model selection methods and inference procedures for secondary analyses of clinical trial data when lifetime medical cost is subject to induced censoring. Our model selection methods extend a theory of penalized estimating function to a calibration regression estimator tailored for this data type. Next, we develop a novel inference procedure for the unpenalized regression estimator using perturbation and resampling theory. Then, we extend this resampling plan to accommodate regularized coefficient estimation of censored lifetime medical cost and develop postselection inference procedures for the final model. Our methods are motivated by data from Southwest Oncology Group Protocol 9509, a clinical trial of patients with advanced non-small cell lung cancer, and our models of lifetime medical cost are specific to this population. But the methods presented in this article are built on rather general techniques and could be applied to larger databases as those data become available. PMID:26689300

  16. UQ-Guided Selection of Physical Parameterizations in Climate Models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.

    2015-12-01

    Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.

  17. Selection of Representative Models for Decision Analysis Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology for identifying representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  18. Strategy selection: An introduction to the modeling challenge.

    PubMed

    Marewski, Julian N; Link, Daniela

    2014-01-01

    Modeling the mechanisms that determine how humans and other agents choose among different behavioral and cognitive processes (be they strategies, routines, actions, or operators) represents a paramount theoretical stumbling block across disciplines, ranging from the cognitive and decision sciences to economics, biology, and machine learning. By using the cognitive and decision sciences as a case study, we provide an introduction to what is also known as the strategy selection problem. First, we explain why many researchers assume humans and other animals to come equipped with a repertoire of behavioral and cognitive processes. Second, we expose three descriptive, predictive, and prescriptive challenges that are common to all disciplines which aim to model the choice among these processes. Third, we give an overview of different approaches to strategy selection. These include cost-benefit, ecological, learning, memory, unified, connectionist, sequential sampling, and maximization approaches. We conclude by pointing to opportunities for future research and by stressing that the selection problem is far from being resolved. WIREs Cogn Sci 2014, 5:39-59. doi: 10.1002/wcs.1265. For further resources related to this article, please visit the WIREs website. PMID:26304296

  19. Variable selection method for the identification of epistatic models.

    PubMed

    Holzinger, Emily Rose; Szymczak, Silke; Dasgupta, Abhijit; Malley, James; Li, Qing; Bailey-Wilson, Joan E

    2015-01-01

    Standard analysis methods for genome wide association studies (GWAS) are not robust to complex disease models, such as interactions between variables with small main effects. These types of effects likely contribute to the heritability of complex human traits. Machine learning methods that are capable of identifying interactions, such as Random Forests (RF), are an alternative analysis approach. One caveat to RF is that there is no standardized method of selecting variables so that false positives are reduced while retaining adequate power. To this end, we have developed a novel variable selection method called relative recurrency variable importance metric (r2VIM). This method incorporates recurrency and variance estimation to assist in optimal threshold selection. For this study, we specifically address how this method performs in data with almost completely epistatic effects (i.e. no marginal effects). Our results show that with appropriate parameter settings, r2VIM can identify interaction effects when the marginal effects are virtually nonexistent. It also outperforms logistic regression, which has essentially no power under this type of model when the number of potential features (genetic variants) is large. (All Supplementary Data can be found here: http://research.nhgri.nih.gov/manuscripts/Bailey-Wilson/r2VIM_epi/). PMID:25592581

  20. Variable Selection Method for the Identification of Epistatic Models

    PubMed Central

    Holzinger, Emily Rose; Szymczak, Silke; Dasgupta, Abhijit; Malley, James; Li, Qing; Bailey-Wilson, Joan E.

    2014-01-01

    Standard analysis methods for genome wide association studies (GWAS) are not robust to complex disease models, such as interactions between variables with small main effects. These types of effects likely contribute to the heritability of complex human traits. Machine learning methods that are capable of identifying interactions, such as Random Forests (RF), are an alternative analysis approach. One caveat to RF is that there is no standardized method of selecting variables so that false positives are reduced while retaining adequate power. To this end, we have developed a novel variable selection method called relative recurrency variable importance metric (r2VIM). This method incorporates recurrency and variance estimation to assist in optimal threshold selection. For this study, we specifically address how this method performs in data with almost completely epistatic effects (i.e. no marginal effects). Our results show that with appropriate parameter settings, r2VIM can identify interaction effects when the marginal effects are virtually nonexistent. It also outperforms logistic regression, which has essentially no power under this type of model when the number of potential features (genetic variants) is large. (All Supplementary Data can be found here: http://research.nhgri.nih.gov/manuscripts/Bailey-Wilson/r2VIM_epi/). PMID:25592581
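
    A rough sketch of the recurrency idea is given below in Python; it is inspired by the description above, not taken from the authors' implementation. Each seeded random-forest run scales its permutation importances by the magnitude of the most negative importance (a noise-floor estimate), and only features that clear a threshold in every run are kept.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def r2vim_like(X, y, n_runs=5, factor=1.0):
    # Rough approximation of the recurrency idea (not the authors' r2VIM code):
    # per run, scale permutation importances by the magnitude of the most
    # negative importance, then keep features whose scaled importance
    # exceeds `factor` in every run.
    rel = []
    for seed in range(n_runs):
        rf = RandomForestClassifier(n_estimators=300, random_state=seed).fit(X, y)
        imp = permutation_importance(rf, X, y, n_repeats=10,
                                     random_state=seed).importances_mean
        floor = abs(imp.min()) or 1e-12   # guard against an all-positive run
        rel.append(imp / floor)
    return np.flatnonzero(np.array(rel).min(axis=0) >= factor)

# Toy usage: a purely epistatic (XOR) signal between two of 50 binary features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50)).astype(float)
y = (X[:, 0] != X[:, 1]).astype(int)   # interaction with no marginal effects
print("selected feature indices:", r2vim_like(X, y))
```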

  1. Multilevel selection in a resource-based model

    NASA Astrophysics Data System (ADS)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  2. Selecting global climate models for regional climate change studies

    PubMed Central

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures. PMID:19439652

  3. A Dual-Stage Two-Phase Model of Selective Attention

    ERIC Educational Resources Information Center

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  4. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day−1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.

  5. Selection Strategies for Social Influence in the Threshold Model

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
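
    A minimal sketch of the threshold dynamics and a degree-rank initiator strategy is given below, using networkx; the test graph, threshold value, and seed-set size are illustrative assumptions.

```python
import networkx as nx

def threshold_cascade(G, seeds, theta=0.5):
    # Threshold dynamics as described above: a node adopts the new opinion once
    # the fraction of its neighbors holding that opinion exceeds `theta`.
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in G:
            if v in active or G.degree(v) == 0:
                continue
            frac = sum(u in active for u in G.neighbors(v)) / G.degree(v)
            if frac > theta:
                active.add(v)
                changed = True
    return active

# Example: degree-rank initiator selection on a scale-free test graph.
G = nx.barabasi_albert_graph(1000, 3, seed=0)
top = sorted(G, key=G.degree, reverse=True)[:50]
print("degree-rank spread:", len(threshold_cascade(G, top, theta=0.3)))
```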

  6. Selection Experiments in the Penna Model for Biological Aging

    NASA Astrophysics Data System (ADS)

    Medeiros, G.; Idiart, M. A.; de Almeida, R. M. C.

    We consider the Penna model for biological aging to investigate correlations between early fertility and late life survival rates in populations at equilibrium. We consider inherited initial reproduction ages together with a reproduction cost translated in a probability that mother and offspring die at birth, depending on the mother's age. For convenient sets of parameters, the equilibrated populations present genetic variability in both genetically programmed death age and initial reproduction age. In the asexual Penna model, a negative correlation between early life fertility and late life survival rates naturally emerges in the stationary solutions. In the sexual Penna model, selection experiments are performed where individuals are sorted by initial reproduction age from the equilibrated populations and the separated populations are evolved independently. After a transient, a negative correlation between early fertility and late age survival rates also emerges in the sense that populations that start reproducing earlier present smaller average genetically programmed death age. These effects appear due to the age structure of populations in the steady state solution of the evolution equations. We claim that the same demographic effects may be playing an important role in selection experiments in the laboratory.

  7. Analysis improves selection of rheological model for slurries

    SciTech Connect

    Moftah, K.

    1993-10-25

    The use of a statistical index of determination can help select a fluid model to describe the rheology of oil well cement slurries. The closer the index is to unity, the better the particular model will describe the actual fluid behavior. Table 1 lists a computer program written in Quick Basic to calculate rheological parameters and an index of determination for the Bingham plastic and power law models. The points used for the calculation of the rheological parameters can be selected from the data set. The skipped points can then be introduced and the calculations continued, not restarted, to obtain the parameters for the full set of data. The two sets of results are then compared for the decision to include or exclude the added points in the regression. The program also calculates the apparent viscosity to help determine where turbulence or high gross error occurred. In addition, the program calculates the confidence interval of the rheological parameters for a 90% level of confidence.
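
    The calculation the program performs can be sketched in a few lines of Python (in place of the original Quick Basic): fit the Bingham plastic model by ordinary least squares and the power law model by regression in log-log space, then compare indices of determination. The viscometer readings below are illustrative, not from the article.

```python
import numpy as np

# Viscometer readings: shear rate (1/s) vs shear stress (Pa); values illustrative.
gamma = np.array([5.1, 10.2, 51.0, 102.0, 170.0, 340.0, 511.0])
tau   = np.array([12.0, 15.0, 28.0, 41.0, 55.0, 88.0, 118.0])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

# Bingham plastic: tau = tau0 + mu_p * gamma  (ordinary least squares).
mu_p, tau0 = np.polyfit(gamma, tau, 1)
r2_bingham = r_squared(tau, tau0 + mu_p * gamma)

# Power law: tau = K * gamma**n  (linear regression in log-log space).
n, logK = np.polyfit(np.log(gamma), np.log(tau), 1)
r2_power = r_squared(tau, np.exp(logK) * gamma ** n)

print(f"Bingham:   tau0={tau0:.2f} Pa, mu_p={mu_p:.4f} Pa.s, R2={r2_bingham:.4f}")
print(f"Power law: K={np.exp(logK):.3f}, n={n:.3f}, R2={r2_power:.4f}")
```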

  8. Selection of models to calculate the LLW source term

    SciTech Connect

    Sullivan, T.M.

    1991-10-01

    Performance assessment of an LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab.

  9. A Successive Selection Method for finite element model updating

    NASA Astrophysics Data System (ADS)

    Gou, Baiyong; Zhang, Weijie; Lu, Qiuhai; Wang, Bo

    2016-03-01

    Finite Element (FE) model can be updated effectively and efficiently by using the Response Surface Method (RSM). However, it often involves performance trade-offs such as high computational cost for better accuracy or loss of efficiency when many design parameters are updated. This paper proposes a Successive Selection Method (SSM), which is based on the linear Response Surface (RS) function and orthogonal design. SSM rewrites the linear RS function into a number of linear equations to adjust the Design of Experiment (DOE) after every FE calculation. SSM aims to interpret the implicit information provided by the FE analysis, to locate the DOE points more quickly and accurately, and thereby to alleviate the computational burden. This paper introduces the SSM and its application, describes the solution steps of point selection for DOE in detail, and analyzes SSM's high efficiency and accuracy in the FE model updating. A numerical example of a simply supported beam and a practical example of a vehicle brake disc show that the SSM can provide higher speed and precision in FE model updating for engineering problems than traditional RSM.

  10. Continuum model for chiral induced spin selectivity in helical molecules

    SciTech Connect

    Medina, Ernesto; González-Arraga, Luis A.; Finkelstein-Shapiro, Daniel; Mujica, Vladimiro; Berche, Bertrand

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well-oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  11. Development of Solar Drying Model for Selected Cambodian Fish Species

    PubMed Central

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381

  12. Development of solar drying model for selected Cambodian fish species.

    PubMed

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381
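
    Fitting and ranking thin-layer drying models of this kind is straightforward to sketch. The Python fragment below fits the Logarithmic, Modified Page, and Two-term models to a moisture-ratio curve with scipy and compares R2 and RMSE; the data points are hypothetical stand-ins for the measured drying curves.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical moisture-ratio curve (time in hours); the real data are in the paper.
t  = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
mr = np.array([1.00, 0.72, 0.52, 0.39, 0.30, 0.24, 0.20, 0.17, 0.15])

models = {
    "Logarithmic":   (lambda t, a, k, c: a * np.exp(-k * t) + c, [1, 0.3, 0.1]),
    "Modified Page": (lambda t, k, n: np.exp(-((k * t) ** n)), [0.3, 1.0]),
    "Two-term":      (lambda t, a, k1, b, k2: a * np.exp(-k1 * t) + b * np.exp(-k2 * t),
                      [0.5, 0.5, 0.5, 0.1]),
}

for name, (f, p0) in models.items():
    p, _ = curve_fit(f, t, mr, p0=p0, maxfev=10000)
    resid = mr - f(t, *p)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
    print(f"{name:14s} RMSE={rmse:.4f}  R2={r2:.4f}")
```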

  13. Variable selection for semiparametric mixed models in longitudinal studies.

    PubMed

    Ni, Xiao; Zhang, Daowen; Zhang, Hao Helen

    2010-03-01

    We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: the roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on linear coefficients to achieve model sparsity. Compared to existing estimation equation based approaches, our procedure provides valid inference for data missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimation for estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to the existing ones. We then apply the new method to a real data set from a lactation study. PMID:19397585

  14. A qualitative model structure sensitivity analysis method to support model selection

    NASA Astrophysics Data System (ADS)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a more simple routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.
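
    The one-component-at-a-time idea translates directly into code. In the Python sketch below, the component options and the scoring stub are hypothetical placeholders for a real flexible model-building environment and objective function; only the vary-one-component loop reflects the method described above.

```python
# One-component-at-a-time structure variation, mirroring the OAT idea above.
base = {"storage": "linear", "interflow": "off", "routing": "simple"}
options = {"storage": ["nonlinear"], "interflow": ["on"], "routing": ["cascade"]}

def run_model(structure):
    # Stand-in for running the hydrological model and scoring the hydrograph
    # (e.g., with NSE); the offsets below are arbitrary illustrative numbers.
    bonus = {"nonlinear": 0.05, "on": 0.02, "cascade": -0.01}
    return 0.70 + sum(bonus.get(v, 0.0) for v in structure.values())

baseline = run_model(base)
for component, choices in options.items():
    for choice in choices:
        variant = dict(base, **{component: choice})
        delta = run_model(variant) - baseline
        print(f"{component}: {base[component]} -> {choice}: delta objective = {delta:+.3f}")
```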

  15. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  16. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    PubMed

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which perform satisfactorily. We also demonstrate the method by selecting, from several well-known and biologically interpretable ODE models, the best predator-prey ODE to model a lynx and hare population dynamical system. PMID:25287611
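
    The LSA-plus-adaptive-Lasso step can be sketched once the full-model estimate and its covariance are in hand. The Python fragment below is a generic rendering of that idea, not the authors' code: the quadratic approximation is turned into a synthetic least-squares problem, adaptive weights 1/|theta_hat_j| are folded into the design, and a plain Lasso does the selection; the toy numbers are illustrative.

```python
import numpy as np
from numpy.linalg import cholesky, inv
from sklearn.linear_model import Lasso

def lsa_adaptive_lasso(theta_hat, cov_hat, lam=0.05):
    # Least-squares approximation: replace the full-model likelihood by the
    # quadratic form (theta - theta_hat)' cov^{-1} (theta - theta_hat), then
    # shrink with adaptive-lasso weights w_j = 1/|theta_hat_j|.
    A = cholesky(inv(cov_hat)).T        # A'A = cov^{-1}
    b = A @ theta_hat
    w = 1.0 / np.abs(theta_hat)
    A_scaled = A / w                    # column j divided by w_j
    fit = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(A_scaled, b)
    return fit.coef_ / w                # back-transform; zeros drop ODE terms

# Toy example: full-model estimates for 4 ODE rate parameters, two of them near 0.
theta_hat = np.array([1.8, 0.02, -0.9, -0.01])
cov_hat = 0.01 * np.eye(4)
print(lsa_adaptive_lasso(theta_hat, cov_hat))   # small coefficients shrink to 0
```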

  17. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite the distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions. PMID:27378900

  18. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    Porter, Reid B.; Loveland, Rohan; Rosten, Ed

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the object's appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate, and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance of the tracking system, and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  19. Stochastic group selection model for the evolution of altruism

    NASA Astrophysics Data System (ADS)

    Silva, Ana T. C.; Fontanari, J. F.

    We study numerically and analytically a stochastic group selection model in which a population of asexually reproducing individuals, each of which can be either altruist or non-altruist, is subdivided into M reproductively isolated groups (demes) of size N. The cost associated with being altruistic is modelled by assigning the fitness 1 − τ, with τ ∈ [0,1], to the altruists and the fitness 1 to the non-altruists. In the case that the altruistic disadvantage τ is not too large, we show that the finite-M fluctuations are small and practically do not alter the deterministic results obtained for M→∞. However, for large τ these fluctuations greatly increase the instability of the altruistic demes to mutations. These results may be relevant to the dynamics of parasite-host systems and, in particular, to explain the importance of mutation in the evolution of parasite virulence.

  20. Radial Domany-Kinzel models with mutation and selection

    NASA Astrophysics Data System (ADS)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.

  1. Modeling drivers' speed selection as a trade-off behavior.

    PubMed

    Tarko, Andrew P

    2009-05-01

    This paper proposes a new model of driver-preferred speeds derived from the assumption that drivers trade off a portion of their safety for a time gain. The risk of receiving a ticket for speeding is also considered. A trip disutility concept is selected to combine the three components of speed choice (safety, time, and enforcement). The perceived crash risk and speed enforcement are considered as speed deterrents, while the perceived value of a time gain is considered as a speed enticement. According to this concept, drivers prefer speeds that minimize the perceived trip disutility. The modeled trade-off behavior does not have to be fully rational, since it is affected by drivers' preferences and their ability to perceive risk; as such, the proposed framework follows the concept of bounded rationality. The attractiveness of the model lies in its parameters being estimable from observed preferred speeds and then interpretable as factors of risk perception, the subjective value of time, and the perceived risk of speed enforcement. The proposed method may usefully supplement behavioral studies based on driver surveys. The study focuses on four-lane rural and suburban roads in Indiana, USA. The behavior of two types of drivers (trucks and cars) is modeled. Test sites were selected so that the roads and other local characteristics varied across the studied sites while the population of drivers could be assumed to be the same. The density of intersections, land development along the road, and the presence of sidewalks were the prominent risk perception factors identified. Another interesting finding is that the speed limit seems to encourage slow drivers to drive faster and fast drivers to drive slower. PMID:19393813
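
    The disutility-minimization idea can be made concrete under assumed functional forms (all names and coefficients below are illustrative placeholders, not the paper's estimated parameters):

        import numpy as np

        def disutility(v, v_limit=60.0, crash_coef=1e-4, crash_rate=0.08,
                       enforce_prob=0.05, ticket_fine=2.0, value_of_time=0.5,
                       trip_length=10.0):
            crash = crash_coef * np.exp(crash_rate * v) * trip_length  # perceived crash cost
            ticket = enforce_prob * ticket_fine * (v > v_limit)        # expected fine
            time = value_of_time * trip_length / v                     # travel-time cost
            return crash + ticket + time

        speeds = np.linspace(40.0, 90.0, 501)
        preferred_speed = speeds[np.argmin(disutility(speeds))]   # speed minimizing disutility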

  2. Radial Domany-Kinzel models with mutation and selection.

    PubMed

    Lavrentovich, Maxim O; Korolev, Kirill S; Nelson, David R

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. "Inflation" at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t* = R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island. PMID:23410279

  3. Using Epidemiological Models and Genetic Selection to Identify Theoretical Opportunities to Reduce Disease Impact

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Selection for disease resistance is a contemporary topic with developing approaches for genetic improvement. Merging the sciences of genetic selection and epidemiology is essential to identify selection schemes to enhance disease resistance. Epidemiological models can identify theoretical opportuni...

  4. Bayesian Model Selection with Network Based Diffusion Analysis.

    PubMed

    Whalen, Andrew; Hoppitt, William J E

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including in the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed. PMID:27092089
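
    For reference, the WAIC itself is straightforward to compute from posterior samples; a generic sketch (of the criterion, not the authors' NBDA code) given a matrix of pointwise log-likelihoods:

        import numpy as np
        from scipy.special import logsumexp

        def waic(loglik):
            """loglik: (S, n) log-likelihoods of n observations under S posterior draws."""
            S = loglik.shape[0]
            lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))  # log pointwise predictive density
            p_waic = np.sum(np.var(loglik, axis=0, ddof=1))       # effective number of parameters
            return -2.0 * (lppd - p_waic)   # lower values indicate better expected prediction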

  5. Percolation model for selective dissolution of multi-component glasses

    SciTech Connect

    Kale, R.P.; Brinker, C.J.

    1995-03-01

    A percolation model is developed which accounts for most known features of the process of porous glass membrane preparation by selective dissolution of multi-component glasses. The model is founded within the framework of classical percolation theory, wherein the components of a glass are represented by random sites on a suitable lattice. Computer simulation is used to mirror the generation of a porous structure during the dissolution process, reproducing many of the features associated with the phenomenon. Simulation results evaluate the effect of the initial composition of the glass on the kinetics of the leaching process as well as the morphology of the generated porous structure. The percolation model establishes the porous structure as a percolating cluster of unleachable constituents in the glass. The simulation algorithm incorporates removal of both the accessible leachable components in the glass and the independent clusters of unleachable components not attached to the percolating cluster. The dissolution process thus becomes limited by the conventional site percolation thresholds of the unleachable components (which restrict the formation of the porous network) as well as of the leachable components (which restrict the accessibility of the solvating medium into the glass). The simulation results delineate the range of compositional variations for successful porous glass preparation and predict the variation of porosity, surface area, dissolution rates and effluent composition with initial composition and time. Results compared well with experimental studies and improved upon similar models attempted in the past.
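
    The core dissolution rule can be illustrated with a small site-percolation sketch (the lattice, composition and surface-access rule here are simplifying assumptions, not the paper's algorithm):

        import numpy as np
        from scipy.ndimage import label

        def leach(glass, leachable=1):
            """glass: 2D int array of component ids; leachable sites dissolve only if
            their cluster reaches the top surface (row 0)."""
            clusters, _ = label(glass == leachable)
            exposed = np.unique(clusters[0])               # cluster ids touching the surface
            dissolved = np.isin(clusters, exposed[exposed > 0])
            etched = glass.copy()
            etched[dissolved] = 0                          # 0 marks the pore space
            return etched

        rng = np.random.default_rng(0)
        glass = (rng.random((200, 200)) < 0.55).astype(int)   # 55% leachable component
        porous = leach(glass)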

  6. Bayesian Model Selection with Network Based Diffusion Analysis

    PubMed Central

    Whalen, Andrew; Hoppitt, William J. E.

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including in the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed. PMID:27092089

  7. Council for Regulatory Environmental Modeling (CREM) Pilot Water Quality Model Selection Tool

    EPA Science Inventory

    EPA's Council for Regulatory Environmental Modeling (CREM) is currently supporting the development of a pilot model selection tool that is intended to help the states and the regions implement the total maximum daily load (TMDL) program. This tool will be implemented within the ...

  8. Impact of selected troposphere models on Precise Point Positioning convergence

    NASA Astrophysics Data System (ADS)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various factors that influence PPP convergence, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final value to the estimation process. Here we present a comparison of several PPP result sets, each based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to make the impact of the applied nominal values comparable. The worst case initializes the zenith wet delay with a zero value (ZERO-WET). The impact of any candidate model for the tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first

  9. On model selections for repeated measurement data in clinical studies.

    PubMed

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. PMID:25645442

  10. A Model for Selection of Eyespots on Butterfly Wings

    PubMed Central

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    Unsolved Problem: The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. Key Idea and Model: We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus
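
    A stripped-down finite-difference analogue of the boundary-driven idea (the paper uses finite elements; the kinetics, constants and the no-flux condition below are generic placeholders): a 1D reaction-diffusion system with a fixed Dirichlet morphogen level at the proximal end.

        import numpy as np

        def simulate(n=100, L=10.0, steps=20000, dt=1e-3, Du=0.05, Dv=1.0,
                     a=0.1, b=0.9, proximal=2.0):
            dx = L / n
            u = np.full(n, a + b)                 # activator, at the homogeneous steady state
            v = np.full(n, b / (a + b) ** 2)      # substrate
            for _ in range(steps):
                lap_u = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
                lap_v = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2
                u = u + dt * (Du * lap_u + a - u + u**2 * v)   # Schnakenberg kinetics
                v = v + dt * (Dv * lap_v + b - u**2 * v)
                u[0] = proximal                   # Dirichlet value on the proximal vein
                u[-1] = a + b
                v[0], v[-1] = v[1], v[-2]         # no-flux ends for the substrate
            return u, v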

  11. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the set of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is an ML algorithm that performs FS as part of its
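
    The IG criterion mentioned above is simple to state in code (a generic computation for one discretized predictor and binary presence/absence labels; not the authors' toolchain):

        import numpy as np

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def information_gain(feature_bins, labels):
            """IG = H(labels) - sum_b P(bin = b) * H(labels | bin = b)."""
            gain = entropy(labels)
            for b in np.unique(feature_bins):
                mask = feature_bins == b
                gain -= mask.mean() * entropy(labels[mask])
            return gain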

  12. Multiphysics modeling of selective laser sintering/melting

    NASA Astrophysics Data System (ADS)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States, according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net-shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element-finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  13. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
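
    A minimal usage sketch of the API described above (the objective and search space are placeholders):

        from hyperopt import fmin, tpe, hp, STATUS_OK

        def objective(params):
            # Stand-in for training a model and returning a validation loss.
            loss = (params["x"] - 3.0) ** 2 + params["y"]
            return {"loss": loss, "status": STATUS_OK}

        space = {
            "x": hp.uniform("x", -5, 5),
            "y": hp.choice("y", [0.0, 1.0]),
        }

        best = fmin(objective, space, algo=tpe.suggest, max_evals=100)
        print(best)   # best assignment found (choice values reported by index)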

  14. Estimating a dynamic model of sex selection in China.

    PubMed

    Ebenstein, Avraham

    2011-05-01

    High ratios of males to females in China, which have historically concerned researchers (Sen 1990), have increased in the wake of China's one-child policy, which began in 1979. Chinese policymakers are currently attempting to correct the imbalance in the sex ratio through initiatives that provide financial compensation to parents with daughters. Other scholars have advocated a relaxation of the one-child policy to allow more parents to have a son without engaging in sex selection. In this article, I present a model of fertility choice when parents have access to a sex-selection technology and face a mandated fertility limit. By exploiting variation in fines levied in China for unsanctioned births, I estimate the relative price of a son and daughter for mothers observed in China's census data (1982-2000). I find that a couple's first son is worth 1.42 years of income more than a first daughter, and the premium is highest among less-educated mothers and families engaged in agriculture. Simulations indicate that a subsidy of 1 year of income to families without a son would reduce the number of "missing girls" by 67% but impose an annual cost of 1.8% of Chinese gross domestic product (GDP). Alternatively, a three-child policy would reduce the number of "missing girls" by 56% but increase the fertility rate by 35%. PMID:21594735

  15. Model catalysis by size-selected cluster deposition

    SciTech Connect

    Anderson, Scott

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  16. A clonal selection algorithm model for daily rainfall data prediction.

    PubMed

    Noor Rodi, N S; Malek, M A; Ismail, Amelia Ritahani; Ting, Sie Chun; Tang, Chao-Wei

    2014-01-01

    This study applies the clonal selection algorithm (CSA) of artificial immune systems (AIS) as an alternative method for predicting future rainfall data. Stochastic and artificial neural network techniques are commonly used in hydrology; in this study, however, a novel technique for forecasting rainfall was established. Results from this study show that the theory of biological immune systems can be applied to time series data. Biological immune systems are nonlinear and chaotic in nature, similar to daily rainfall data. This study found that the proposed CSA was able to predict the daily rainfall data with an accuracy of 90% during the model training stage. In the testing stage, the results showed that the agreement between the actual and the generated data was within the range of 75 to 92%. Thus, the CSA approach offers a new method for rainfall data prediction. PMID:25429452
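
    The clone-and-hypermutate core of a CSA can be sketched generically (my simplification of the algorithm family, not the paper's rainfall model; the cloning and mutation schedules are assumptions):

        import numpy as np

        rng = np.random.default_rng(1)

        def clonal_step(pop, affinity, max_clones=5):
            """pop: (P, d) candidate solutions; affinity: (P,), higher is better."""
            clones = [pop]                                    # keep the parents
            for rank, i in enumerate(np.argsort(affinity)[::-1], start=1):
                n = max(1, max_clones // rank)                # better antibodies clone more...
                sigma = 1.0 / (1.0 + np.exp(affinity[i]))     # ...and are mutated less
                clones.append(pop[i] + sigma * rng.normal(size=(n, pop.shape[1])))
            return np.vstack(clones)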

  17. Analytical Modelling of Milling for Tool Design and Selection

    SciTech Connect

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-05-17

    This paper presents an efficient analytical model which makes it possible to simulate a wide range of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant-lead helix and for a constant local helix angle. A thermomechanical approach to oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of several tool geometry parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  18. Model of selective growth of III-V nanowires

    NASA Astrophysics Data System (ADS)

    Dubrovskii, V. G.

    2015-12-01

    A kinetic model of the growth of nanowires of III-V semiconductor compounds (including nitrides) in the absence of a metal catalyst is proposed; these conditions correspond to the methods of selective epitaxy or self-induced growth. A stationary solution for the nanowire growth rate is obtained, which indicates that the growth can be limited not only by the kinetics of the group-III element with allowance for surface diffusion (as was suggested earlier), but also by the flow of the group-V element. Different modes are characterized by radically different dependences of the growth rate on the nanowire radius. Under arsenic-enriched conditions, a typical dependence with a maximum and decay at large radii (limited by gallium adatom diffusion) is observed. Under gallium-enriched conditions, there is a transition to a growth rate that is practically independent of the radius and increases linearly with the arsenic flow.

  19. [Model of the selective calcium channel of characean algae].

    PubMed

    Lunevskiĭ, V Z; Zherelova, O M; Aleksandrov, A A; Vinokurov, M G; Berestovskiĭ, G N

    1980-01-01

    The present work further investigates the selective filter of the calcium channel in both cell membranes and reconstructed channels. For the studies on cell membranes, an inhibitor of chloride channels (ethacrynic acid) was chosen so that currents pass only through the calcium channels. On both the cells and the reconstructed channels, the permeability of ions of different crystal radii and valencies was investigated. The obtained results suggest that the channel represents a wide water-filled pore with a diameter larger than 8 Å, into which ions enter together with their nearest water shell. The values of the maximal currents are determined by the electrostatic interaction of the ions with the anion center of the channel. A phenomenological two-barrier model of the channel is given which describes the movement of all the ions studied. PMID:6251921

  20. Bayesian Model Selection in 'Big Data' Spectral Analysis

    NASA Astrophysics Data System (ADS)

    Fischer, Travis C.; Crenshaw, D. Michael; Baron, Fabien; Kloppenborg, Brian K.; Pope, Crystal L.

    2015-01-01

    As IFU observations and large spectral surveys continue to become more prevalent, the handling of thousands of spectra has become commonplace. Astronomers look at objects with increasingly complex emission-line structures, so establishing a method that easily allows for multiple-component analysis of these features in an automated fashion would be of great use to the community. Already used in exoplanet detection and interferometric image reconstruction, we present a new application of Bayesian model selection in 'big data' spectral analysis. This technique fits multiple emission-line components in an automated fashion while simultaneously determining the correct number of components in each spectrum, streamlining the line measurements for a large number of spectra into a single process.
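
    One simple way to realize automated component-count selection (shown with an information criterion standing in for the full Bayesian evidence computation; the initial guesses and noise model are assumptions):

        import numpy as np
        from scipy.optimize import curve_fit

        def multi_gauss(x, *p):
            """Sum of Gaussians; p packs (amplitude, center, width) per component."""
            y = np.zeros_like(x, dtype=float)
            for a, m, s in zip(p[0::3], p[1::3], p[2::3]):
                y += a * np.exp(-0.5 * ((x - m) / s) ** 2)
            return y

        def n_components_by_bic(x, flux, sigma, kmax=3):
            best_k, best_bic = 0, np.inf
            for k in range(1, kmax + 1):
                centers = np.linspace(x[0], x[-1], k + 2)[1:-1]   # spread initial centers
                p0 = [v for c in centers for v in (flux.max(), c, (x[-1] - x[0]) / 10)]
                try:
                    popt, _ = curve_fit(multi_gauss, x, flux, p0=p0, maxfev=20000)
                except RuntimeError:
                    continue                                       # fit failed; skip this k
                chi2 = np.sum(((flux - multi_gauss(x, *popt)) / sigma) ** 2)
                bic = chi2 + 3 * k * np.log(len(x))   # -2 ln L + (n params) ln n, known sigma
                if bic < best_bic:
                    best_k, best_bic = k, bic
            return best_k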

  1. ModelMage: a tool for automatic model generation, selection and management.

    PubMed

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates the management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to the data and, when data are available, provides a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine; thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122

  2. Agent-Based vs. Equation-Based Epidemiological Models: A Model Selection Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Nutaro, James J

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk in choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
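
    The equation-based side of such a comparison is typically a compartmental model; a minimal SIR sketch (the rates are illustrative, not the 1918 calibration):

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        t = np.linspace(0.0, 160.0, 161)                            # days
        sol = odeint(sir, [0.999, 0.001, 0.0], t, args=(0.4, 0.1))  # basic reproduction number 4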

  3. Selection of hydrologic modeling approaches for climate change assessment: A comparison of model scale and structures

    NASA Astrophysics Data System (ADS)

    Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won

    2012-09-01

    A wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally distinct approaches and spatial scales of parameterization for modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km² area), a basin-scale approach (using the PRMS model; 29,700 km² area), and a site-specific approach (the GSFLOW model; 4700 km² area), all forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large-scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for the climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.

  4. Projection- vs. selection-based model reduction of complex hydro-ecological models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Giuliani, M.; Castelletti, A.; Alsahaf, A.

    2014-12-01

    Projection-based model reduction is one of the most popular approaches for identifying reduced-order models (emulators). It is based on the idea of sampling from the original model various values, or snapshots, of the state variables, and then using these snapshots in a projection scheme to find a lower-dimensional subspace that captures the majority of the variation of the original model. The model is then projected onto this subspace and solved, yielding a computationally efficient emulator. Yet this approach may unnecessarily increase the complexity of the emulator, especially when only a few state variables of the original model are relevant to the output of interest. This is the case for complex hydro-ecological models, which typically account for a variety of water quality processes. Selection-based model reduction, on the other hand, uses the information contained in the snapshots to select the state variables of the original model that are relevant to the emulator's output, thus allowing for model reduction. This provides a better trade-off between fidelity and model complexity, since irrelevant and redundant state variables are excluded from the model reduction process. In this work we address these issues by presenting an exhaustive experimental comparison between two popular projection- and selection-based methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Emulation Modelling (DEMo). The comparison is performed on the reduction of DYRESM-CAEDYM, a 1D hydro-ecological model used to describe the in-reservoir water quality conditions of Tono Dam, an artificial reservoir located in western Japan. Experiments on two different output variables (i.e., chlorophyll-a concentration and release water temperature) show that DEMo achieves the same fidelity as POD while reducing the number of state variables in the emulator.
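
    The projection step of POD reduces to a singular value decomposition of the snapshot matrix; a minimal sketch (the snapshot data and energy threshold are placeholders):

        import numpy as np

        def pod_basis(snapshots, energy=0.99):
            """snapshots: (n_states, n_snapshots); returns the dominant modes."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            cum = np.cumsum(s**2) / np.sum(s**2)
            r = int(np.searchsorted(cum, energy)) + 1   # smallest rank capturing `energy`
            return U[:, :r]                             # reduced state: z = U_r.T @ x

        X = np.random.default_rng(2).normal(size=(100, 40))   # placeholder snapshot matrix
        U_r = pod_basis(X)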

  5. A new approach to modeling covariate effects and individualization in population pharmacokinetics-pharmacodynamics.

    PubMed

    Lai, Tze Leung; Shih, Mei-Chiung; Wong, Samuel P

    2006-02-01

    By combining Laplace's approximation and Monte Carlo methods to evaluate multiple integrals, this paper develops a new approach to estimation in nonlinear mixed effects models that are widely used in population pharmacokinetics and pharmacodynamics. Estimation here involves not only estimating the model parameters from Phase I and II studies but also using the fitted model to estimate the concentration versus time curve or the drug effects of a subject who has covariate information but sparse measurements. Because of its computational tractability, the proposed approach can model the covariate effects nonparametrically by using (i) regression splines or neural networks as basis functions and (ii) AIC or BIC for model selection. Its computational and statistical advantages are illustrated in simulation studies and in Phase I trials. PMID:16402288
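
    The basis-functions-plus-information-criterion idea can be sketched compactly (a polynomial basis standing in for the paper's regression splines; the Gaussian-likelihood AIC form is standard):

        import numpy as np

        def select_basis_by_aic(x, y, max_degree=8):
            """Pick a polynomial basis size by AIC under a Gaussian likelihood."""
            n = len(x)
            best = (np.inf, None, None)
            for d in range(1, max_degree + 1):
                X = np.vander(x, d + 1)
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                rss = np.sum((y - X @ beta) ** 2)
                aic = n * np.log(rss / n) + 2 * (d + 1)   # up to an additive constant
                if aic < best[0]:
                    best = (aic, d, beta)
            return best   # (aic, degree, coefficients)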

  6. Bayesian predictive modeling for genomic based personalized treatment selection.

    PubMed

    Ma, Junsheng; Stingo, Francesco C; Hobbs, Brian P

    2016-06-01

    Efforts to personalize medicine in oncology have been limited by reductive characterizations of the intrinsically complex underlying biological phenomena. Future advances in personalized medicine will rely on molecular signatures that derive from synthesis of multifarious interdependent molecular quantities requiring robust quantitative methods. However, highly parameterized statistical models when applied in these settings often require a prohibitively large database and are sensitive to proper characterizations of the treatment-by-covariate interactions, which in practice are difficult to specify and may be limited by generalized linear models. In this article, we present a Bayesian predictive framework that enables the integration of a high-dimensional set of genomic features with clinical responses and treatment histories of historical patients, providing a probabilistic basis for using the clinical and molecular information to personalize therapy for future patients. Our work represents one of the first attempts to define personalized treatment assignment rules based on large-scale genomic data. We use actual gene expression data acquired from The Cancer Genome Atlas in the settings of leukemia and glioma to explore the statistical properties of our proposed Bayesian approach for personalizing treatment selection. The method is shown to yield considerable improvements in predictive accuracy when compared to penalized regression approaches. PMID:26575856

  7. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and to propagate these uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification

  8. Antagonistic versus non-antagonistic models of balancing selection: Characterizing the relative timescales and hitchhiking effects of partial selective sweeps

    PubMed Central

    Connallon, Tim; Clark, Andrew G.

    2012-01-01

    Antagonistically selected alleles -- those with opposing fitness effects between sexes, environments, or fitness components -- represent an important component of additive genetic variance in fitness-related traits, with stably balanced polymorphisms often hypothesized to contribute to observed quantitative genetic variation. Balancing selection hypotheses imply that intermediate-frequency alleles disproportionately contribute to the genetic variance of life history traits and fitness. Such alleles may also associate with population genetic footprints of recent selection, including reduced genetic diversity and inflated linkage disequilibrium at linked, neutral sites. Here, we compare the evolutionary dynamics of different balancing selection models, and characterize the evolutionary timescale and hitchhiking effects of partial selective sweeps generated under antagonistic versus non-antagonistic (e.g., overdominant and frequency-dependent selection) processes. We show that the evolutionary timescales of partial sweeps tend to be much longer, and hitchhiking effects drastically weaker, under scenarios of antagonistic selection. These results predict an interesting mismatch between molecular population genetic and quantitative genetic patterns of variation. Balanced, antagonistically selected alleles are expected to contribute more to additive genetic variance for fitness than alleles maintained by classic, non-antagonistic mechanisms. Nevertheless, classical mechanisms of balancing selection are much more likely to generate strong population genetic signatures of recent balancing selection. PMID:23461340

  9. Model-based fault detection and identification with online aerodynamic model structure selection

    NASA Astrophysics Data System (ADS)

    Lombaerts, T.

    2013-12-01

    This publication describes a recursive algorithm for the approximation of time-varying nonlinear aerodynamic models by means of joint adaptive selection of the model structure and parameter estimation. This procedure, called adaptive recursive orthogonal least squares (AROLS), is an extension and modification of the previously developed ROLS procedure. The algorithm is particularly useful for model-based fault detection and identification (FDI) of aerospace systems. After a failure, a completely new aerodynamic model can be elaborated recursively with respect to both structure and parameter values. The performance of the identification algorithm is demonstrated on a simulation data set.
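
    At the heart of any such scheme is a recursive parameter update; a plain recursive least squares step is shown below (a building block of the ROLS/AROLS family; the structure-selection layer of the paper is omitted):

        import numpy as np

        def rls_update(theta, P, x, y, lam=0.99):
            """One RLS step with forgetting factor lam; x is the regressor vector."""
            Px = P @ x
            k = Px / (lam + x @ Px)               # gain vector
            theta = theta + k * (y - x @ theta)   # parameter update
            P = (P - np.outer(k, Px)) / lam       # covariance update
            return theta, P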

  10. Estimating seabed scattering mechanisms via Bayesian model selection.

    PubMed

    Steininger, Gavin; Dosso, Stan E; Holland, Charles W; Dettmer, Jan

    2014-10-01

    A quantitative inversion procedure is developed and applied to determine the dominant scattering mechanism (surface roughness and/or volume scattering) from seabed scattering-strength data. The classification system is based on trans-dimensional Bayesian inversion with the deviance information criterion used to select the dominant scattering mechanism. Scattering is modeled using first-order perturbation theory as due to one of three mechanisms: Interface scattering from a rough seafloor, volume scattering from a heterogeneous sediment layer, or mixed scattering combining both interface and volume scattering. The classification system is applied to six simulated test cases where it correctly identifies the true dominant scattering mechanism as having greater support from the data in five cases; the remaining case is indecisive. The approach is also applied to measured backscatter-strength data where volume scattering is determined as the dominant scattering mechanism. Comparison of inversion results with core data indicates the method yields both a reasonable volume heterogeneity size distribution and a good estimate of the sub-bottom depths at which scatterers occur. PMID:25324059

  11. Binocular rivalry waves in a directionally selective neural field model

    NASA Astrophysics Data System (ADS)

    Carroll, Samuel R.; Bressloff, Paul C.

    2014-10-01

    We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As per previous studies, we incorporate slow adaption as a symmetry breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, which is in agreement with previous experimental and computational studies.

  12. A Model and Heuristic for Solving Very Large Item Selection Problems.

    ERIC Educational Resources Information Center

    Swanson, Len; Stocking, Martha L.

    1993-01-01

    A model for solving very large item selection problems is presented. The model builds on binary programming applied to test construction. A heuristic for selecting items that satisfy the constraints in the model is also presented, and various problems are solved using the model and heuristic. (SLD)
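
    A toy binary-programming formulation of item selection (scipy's mixed-integer solver used for illustration; the item values and constraints are placeholders, not the paper's heuristic):

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        info = np.array([0.8, 0.5, 0.9, 0.3, 0.7])  # item information at a target ability
        length = LinearConstraint(np.ones(5), lb=3, ub=3)   # test length: exactly 3 items
        content = LinearConstraint([1, 1, 1, 0, 0], ub=2)   # at most 2 items from one area
        res = milp(c=-info, constraints=[length, content],  # maximize total information
                   integrality=np.ones(5), bounds=Bounds(0, 1))
        selected_items = np.flatnonzero(res.x > 0.5)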

  13. Sensitivity analysis for volcanic source modeling quality assessment and model selection

    NASA Astrophysics Data System (ADS)

    Cannavó, Flavio

    2012-07-01

    The increasing knowledge and understanding of volcanic sources have led to the development and implementation of sophisticated and complex mathematical models whose main goal is to describe field and experimental data. Quantifying a model's ability to describe the data is fundamental for a realistic estimate of the model parameters. Sensitivity analysis can help in identifying the parameters that significantly affect the model's output and in assessing its quality factor. In this paper, we describe Global Sensitivity Analysis (GSA) methods based both on the Fourier Amplitude Sensitivity Test and on the Sobol' approach, and discuss their implementation in a Matlab software tool (GSAT). We also introduce a new criterion for model selection based on sensitivity analysis. The proposed approach is tested and applied to quantify the fitting ability of an analytic volcanic source model on synthetic deformation data. Results show the validity of the method, compared with traditional approaches, in supporting volcanic model selection, and the flexibility of the GSAT software tool in analyzing model sensitivity.
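
    A first-order Sobol' index can be estimated by brute force, which conveys the idea even though FAST and the estimators in GSAT are far more efficient (the test model and sample sizes below are placeholders):

        import numpy as np

        rng = np.random.default_rng(3)

        def first_order_sobol(model, j, d, n=2000, m=200):
            """S_j = Var_{x_j}( E[model | x_j] ) / Var(model), inputs ~ U(0,1)^d."""
            cond_mean = np.empty(m)
            for i, v in enumerate(rng.random(m)):
                X = rng.random((n, d))
                X[:, j] = v                      # freeze input j
                cond_mean[i] = model(X).mean()
            total = model(rng.random((n * m, d))).var()
            return cond_mean.var() / total

        S1 = first_order_sobol(lambda X: X[:, 0] + 0.2 * X[:, 1] ** 2, j=0, d=2)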

  14. Choosing the Optimal Number of Factors in Exploratory Factor Analysis: A Model Selection Perspective

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Zhang, Guangjian; Kim, Cheongtag; Mels, Gerhard

    2013-01-01

    A central problem in the application of exploratory factor analysis is deciding how many factors to retain ("m"). Although this is inherently a model selection problem, a model selection perspective is rarely adopted for this task. We suggest that Cudeck and Henly's (1991) framework can be applied to guide the selection process. Researchers must…
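
    One concrete way to treat m as a model selection problem (an illustration, not the article's framework): score candidate factor numbers by cross-validated log-likelihood.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from sklearn.model_selection import cross_val_score

        def choose_n_factors(X, max_m=6):
            """Score m = 1..max_m factor models by cross-validated log-likelihood."""
            scores = [cross_val_score(FactorAnalysis(n_components=m), X).mean()
                      for m in range(1, max_m + 1)]
            return int(np.argmax(scores)) + 1

        X = np.random.default_rng(0).normal(size=(300, 12))   # placeholder indicator data
        m_best = choose_n_factors(X)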

  15. Bayesian model selection for a finite element model of a large civil aircraft

    SciTech Connect

    Hemez, F. M.; Rutherford, A. C.

    2004-01-01

    Nine aircraft stiffness parameters have been varied and used as inputs to a finite element model of an aircraft to generate natural frequency and deflection features (Goge, 2003). This data set (147 input parameter configurations and associated outputs) is now used to generate a metamodel, or a fast running surrogate model, using Bayesian model selection methods. Once a forward relationship is defined, the metamodel may be used in an inverse sense. That is, knowing the measured output frequencies and deflections, what were the input stiffness parameters that caused them?

  16. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2015-01-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  17. Principal Selection in Rural School Districts: A Process Model.

    ERIC Educational Resources Information Center

    Richardson, M. D.; And Others

    Recent research illustrates the increasingly important role of the school principal. As a result, procedures for selecting principals have also become more critical to rural school districts. School systems, particularly rural school districts, are encouraged to adopt systematic, rational means for selecting administrators. Such procedures will…

  18. Selecting the Highly Gifted for Science: Israeli Sciempiad Model.

    ERIC Educational Resources Information Center

    Nevo, Baruch

    1993-01-01

    Israel's Sciempiad is an annual nationwide science competition for students ages 14 to 15, designed to stimulate interest in science and identify students with potential to become leading researchers. This article discusses how the selection procedure was determined and outlines steps in the selection process. (JDD)

  19. Catchment Classification via Hydrologic Modeling: Evaluating the Relative Importance of Model Selection, Parameterization and Classification Techniques

    NASA Astrophysics Data System (ADS)

    Marshall, L. A.; Smith, T. J.; To, L.

    2015-12-01

    Classification has emerged as an important tool for evaluating the runoff generating mechanisms in catchments and for providing a basis on which to group catchments having similar characteristics. These methods are particularly important for transferring models from one catchment to another in the case of data-scarce regions or paired catchment studies. In many cases, the goal of catchment classification is to identify models or parameter sets that could be applied to similar catchments for predictive purposes. A potential impediment to this goal is the impact of error in both the classification technique and the hydrologic model. In this study, we examine the relationship between catchment classification, hydrologic models, and model parameterizations for the purpose of transferring models between similar catchments. Building on previous work using a data set of over 100 catchments from south-east Australia, we identify several hydrologic model structures and calibrate each model for each catchment. We use clustering to identify groups of catchments with similar hydrologic response (as characterized through the calibrated model parameters). We examine the dependency of the clustered catchment groups on the pre-selected model, the uncertainty in the calibrated model parameters, and the clustering or classification algorithm. Further, we investigate the relationship between the catchment clusters and certain catchment physical characteristics or signatures, which are more typically used for catchment classification. Overall, our work is aimed at elucidating the potential sources of uncertainty in catchment classification, and the utility of classification for improving hydrologic predictions.
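
    The clustering step described above can be as simple as k-means on the calibrated parameter sets (a schematic sketch; the array below is a placeholder for one row of calibrated parameters per catchment):

        import numpy as np
        from sklearn.cluster import KMeans

        params = np.random.default_rng(4).normal(size=(100, 6))   # placeholder parameter sets
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(params)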

  20. Model selection for factorial Gaussian graphical models with an application to dynamic regulatory networks.

    PubMed

    Vinciotti, Veronica; Augugliaro, Luigi; Abbruzzo, Antonino; Wit, Ernst C

    2016-06-01

    Factorial Gaussian graphical models (fGGMs) have recently been proposed for inferring dynamic gene regulatory networks from genomic high-throughput data. In the search for true regulatory relationships amongst the vast space of possible networks, these models allow the imposition of certain restrictions on the dynamic nature of these relationships, such as Markov dependencies of low order - some entries of the precision matrix are a priori zeros - or equal dependency strengths across time lags - some entries of the precision matrix are assumed to be equal. The precision matrix is then estimated by l1-penalized maximum likelihood, imposing a further constraint on the absolute value of its entries, which results in sparse networks. Selecting the optimal sparsity level is a major challenge for this type of approach. In this paper, we evaluate the performance of a number of model selection criteria for fGGMs by means of two simulated regulatory networks from realistic biological processes. The analysis reveals a good performance of fGGMs in comparison with other methods for inferring dynamic networks, and of the KLCV criterion in particular for model selection. Finally, we present an application to high-resolution time-course microarray data from the Neisseria meningitidis bacterium, a causative agent of life-threatening infections such as meningitis. The methodology described in this paper is implemented in the R package sglasso, freely available at CRAN, http://CRAN.R-project.org/package=sglasso. PMID:27023322
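
    A generic l1-penalized precision-matrix estimate of the kind fGGMs build on (sklearn's graphical lasso shown for illustration; the sglasso package adds the factorial zero/equality constraints, and the data and penalty here are placeholders):

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        X = np.random.default_rng(5).normal(size=(60, 10))   # placeholder expression data
        gl = GraphicalLasso(alpha=0.1).fit(X)
        precision = gl.precision_          # zero entries encode conditional independence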

  1. Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test

    PubMed Central

    Jiang, Chuanjin; Zhang, Jing; Song, Fugen

    2014-01-01

    Combination forecasting takes the characteristics of each single forecasting method into consideration and combines the methods to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects the single models arbitrarily, neglecting the internal characteristics of the forecasting object. After discussing the function of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives guidance for single-model selection: no more than five suitable single models should be selected from the many alternatives for a given forecasting target, which increases accuracy and stability. PMID:24892061
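
    As a hedged illustration of the cointegration check on a candidate single model (synthetic data, statsmodels assumed; the paper's full procedure also involves an encompassing test):

        import numpy as np
        from statsmodels.tsa.stattools import coint

        rng = np.random.default_rng(8)
        target = np.cumsum(rng.normal(size=300))              # I(1) forecasting target
        forecast = target + rng.normal(scale=0.5, size=300)   # a single model's forecasts

        # Cointegration between the forecasts and the target suggests the single
        # model tracks the target's long-run behavior and is worth combining.
        t_stat, p_value, _ = coint(forecast, target)
        print("cointegrated at the 5% level:", p_value < 0.05)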

  2. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    PubMed Central

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which, unlike conventional imputation, can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
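
    For intuition, here is a minimal sketch of the classic two-step Heckman estimator on synthetic data (linear outcome for simplicity; the paper applies selection models to a binary HIV outcome, which requires a bivariate-probit variant). All coefficients and data below are hypothetical:

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        n = 5000
        z = rng.normal(size=(n, 2))    # covariates driving survey participation
        x = rng.normal(size=(n, 2))    # covariates for the outcome
        u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)

        participate = (z @ np.array([1.0, -0.5]) + u[:, 0]) > 0   # selection equation
        y = x @ np.array([2.0, 1.0]) + u[:, 1]                    # outcome, seen if selected

        # Step 1: probit for participation, then the inverse Mills ratio.
        Z = sm.add_constant(z)
        probit = sm.Probit(participate.astype(float), Z).fit(disp=0)
        xb = Z @ probit.params
        imr = norm.pdf(xb) / norm.cdf(xb)

        # Step 2: outcome regression on participants, with the IMR as a regressor.
        X2 = sm.add_constant(np.column_stack([x[participate], imr[participate]]))
        ols = sm.OLS(y[participate], X2).fit()
        print(ols.params)   # last coefficient reflects selection on unobservables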

  3. Increased prediction accuracy in wheat breeding trials using a marker x environment interaction genomic selection model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally, these models were developed without considering genotype × environment interaction (GE). Several authors have proposed extensions of the canonical GS model that accomm...

  4. Increased prediction accuracy in wheat breeding trials using a marker x environment interaction genomic selection model

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates for selection. Originally, these models were developed without considering genotype x environment interaction (GxE). Several authors have proposed extensions of the single-environment GS model th...

  5. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ... COMMISSION Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies....

  6. Using Wherry's Adjusted R Squared and Mallow's C (p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  7. Model selection based on robustness criterion with measurement application

    NASA Astrophysics Data System (ADS)

    Brahim-Belhouari, Sofiane; Fleury, Gilles; Davoust, Marie-Eve

    1999-06-01

    Huber's approach to robust estimation is highly fruitful for solving estimation problems with contaminated data or under incomplete information about the error structure. A simple selection procedure is proposed, based on robustness to variations of the error distribution from the assumed one. A minimax M-estimator is used to efficiently estimate the parameters and the measurement quantity. A performance deviation criterion is computed by means of the Monte Carlo method, improved by Latin Hypercube Sampling. The selection procedure is applied to a real measurement problem: groove dimensioning using Remote Field Eddy Current inspection.
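
    The Latin Hypercube improvement mentioned above is available in SciPy; a small hedged sketch (with a hypothetical measurement model and assumed error distributions) of propagating parameter uncertainty this way:

        import numpy as np
        from scipy.stats import norm, qmc

        # Stratified samples on the unit square instead of plain Monte Carlo draws.
        sampler = qmc.LatinHypercube(d=2, seed=0)
        u = sampler.random(n=1000)

        # Map the uniform samples through the assumed error distributions.
        theta = norm.ppf(u, loc=[1.0, 0.2], scale=[0.1, 0.05])

        def measurement_model(t):   # hypothetical model of the measured quantity
            return t[:, 0] * np.exp(-t[:, 1])

        out = measurement_model(theta)
        print(out.mean(), out.std())   # ingredients of a performance deviation criterion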

  8. Stimulus design for model selection and validation in cell signaling.

    PubMed

    Apgar, Joshua F; Toettcher, Jared E; Endy, Drew; White, Forest M; Tidor, Bruce

    2008-02-01

    Mechanism-based chemical kinetic models are increasingly being used to describe biological signaling. Such models serve to encapsulate current understanding of pathways and to enable insight into complex biological processes. One challenge in model development is that, with limited experimental data, multiple models can be consistent with known mechanisms and existing data. Here, we address the problem of model ambiguity by providing a method for designing dynamic stimuli that, in stimulus-response experiments, distinguish among parameterized models with different topologies, i.e., reaction mechanisms, in which only some of the species can be measured. We develop the approach by presenting two formulations of a model-based controller that is used to design the dynamic stimulus. In both formulations, an input signal is designed for each candidate model and parameterization so as to drive the model outputs through a target trajectory. The quality of a model is then assessed by the ability of the corresponding controller, informed by that model, to drive the experimental system. We evaluated our method on models of antibody-ligand binding, mitogen-activated protein kinase (MAPK) phosphorylation and de-phosphorylation, and larger models of the epidermal growth factor receptor (EGFR) pathway. For each of these systems, the controller informed by the correct model is the most successful at designing a stimulus to produce the desired behavior. Using these stimuli we were able to distinguish between models with subtle mechanistic differences or where inputs and outputs were multiple reactions removed from the model differences. An advantage of this method of model discrimination is that it does not require novel reagents or altered measurement techniques; the only change to the experiment is the time course of stimulation. Taken together, these results provide a strong basis for using designed input stimuli as a tool for the development of cell signaling models.

  9. Young Children's Selective Learning of Rule Games from Reliable and Unreliable Models

    ERIC Educational Resources Information Center

    Rakoczy, Hannes; Warneken, Felix; Tomasello, Michael

    2009-01-01

    We investigated preschoolers' selective learning from models that had previously appeared to be reliable or unreliable. Replicating previous research, children from 4 years selectively learned novel words from reliable over unreliable speakers. Extending previous research, children also selectively learned other kinds of acts--novel games--from…

  10. Selection Strategies for Univariate Loglinear Smoothing Models and Their Effect on Equating Function Accuracy

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul W.

    2009-01-01

    In this study, we compared 12 statistical strategies proposed for selecting loglinear models for smoothing univariate test score distributions and for enhancing the stability of equipercentile equating functions. The major focus was on evaluating the effects of the selection strategies on equating function accuracy. Selection strategies' influence…

  11. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    NASA Astrophysics Data System (ADS)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasted patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents

  12. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is a lack of advice on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
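
    Once a (log) Bayesian model evidence has been computed for each parameterization, ranking them is mechanical; a small sketch with purely hypothetical evidence values and equal prior model probabilities:

        import numpy as np

        # Hypothetical log-evidence for the four parameterizations considered.
        log_bme = {"homogeneous": -310.2, "zonation": -302.7,
                   "pilot_points": -303.9, "geostatistical": -305.1}

        vals = np.array(list(log_bme.values()))
        w = np.exp(vals - vals.max())
        w /= w.sum()    # posterior model probabilities under equal priors
        for name, wi in zip(log_bme, w):
            print(name, round(float(wi), 3))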

  13. Support interference of wind tunnel models: A selective annotated bibliography

    NASA Technical Reports Server (NTRS)

    Tuttle, M. H.; Gloss, B. B.

    1981-01-01

    This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.

  14. A Four-Step Model for Teaching Selection Interviewing Skills

    ERIC Educational Resources Information Center

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  15. Support interference of wind tunnel models: A selective annotated bibliography

    NASA Technical Reports Server (NTRS)

    Tuttle, M. H.; Lawing, P. L.

    1984-01-01

    This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.

  16. Selection of Authentic Modelling Practices as Contexts for Chemistry Education

    ERIC Educational Resources Information Center

    Prins, Gjalt T.; Bulte, Astrid M. W.; van Driel, Jan H.; Pilot, Albert

    2008-01-01

    In science education, students should come to understand the nature and significance of models. In the case of chemistry education it is argued that the present use of models is often not meaningful from the students' perspective. A strategy to overcome this problem is to use an authentic chemical modelling practice as a context for a curriculum…

  17. Selected comments on the ORNL Residential Energy-Use Model

    SciTech Connect

    Herbert, J.H.

    1980-06-01

    This report assesses critical technical aspects of the Oak Ridge National Laboratory (ORNL) Residential Energy Use Model. An important component of the ORNL Model is determination of the thermal performance of new equipment or structures. The examples presented here are illustrative of the type of analytic problems discovered in a detailed assessment of the model. A list of references is appended.

  18. An Efficient Bayesian Model Selection Approach for Interacting Quantitative Trait Loci Models With Many Effects

    PubMed Central

    Yi, Nengjun; Shriner, Daniel; Banerjee, Samprit; Mehta, Tapan; Pomp, Daniel; Yandell, Brian S.

    2007-01-01

    We extend our Bayesian model selection framework for mapping epistatic QTL in experimental crosses to include environmental effects and gene–environment interactions. We propose a new, fast Markov chain Monte Carlo algorithm to explore the posterior distribution of unknowns. In addition, we take advantage of any prior knowledge about genetic architecture to increase posterior probability on more probable models. These enhancements have significant computational advantages in models with many effects. We illustrate the proposed method by detecting new epistatic and gene–sex interactions for obesity-related traits in two real data sets of mice. Our method has been implemented in the freely available package R/qtlbim (http://www.qtlbim.org) to facilitate the general usage of the Bayesian methodology for genomewide interacting QTL analysis. PMID:17483424

  19. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    PubMed

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-09-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Both datasets comprise heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities, a feature that significantly improves capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal
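
    For readers unfamiliar with the model class, here is a minimal 2D correlated random walk (the study works in 3D and fits several metrics jointly); the step-length and turn parameters below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(3)

        def correlated_random_walk(n_steps, mean_step, turn_sd):
            """2D CRW: each heading is a small perturbation of the previous one."""
            heading = rng.uniform(0, 2 * np.pi)
            pos = np.zeros((n_steps + 1, 2))
            for t in range(n_steps):
                heading += rng.normal(0.0, turn_sd)    # directional persistence
                step = rng.exponential(mean_step)      # heterogeneous step lengths
                pos[t + 1] = pos[t] + step * np.array([np.cos(heading), np.sin(heading)])
            return pos

        track = correlated_random_walk(200, mean_step=5.0, turn_sd=0.3)
        # Meandering index: net displacement over total path length.
        net = np.linalg.norm(track[-1] - track[0])
        total = np.linalg.norm(np.diff(track, axis=0), axis=1).sum()
        print("meandering index:", round(net / total, 3))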

  20. Fuel model selection for BEHAVE in midwestern oak savannas

    USGS Publications Warehouse

    Grabner, K.W.; Dwyer, J.P.; Cutter, B.E.

    2001-01-01

    BEHAVE, a fire behavior prediction system, can be a useful tool for managing areas with prescribed fire. However, the proper choice of fuel models can be critical in developing management scenarios. BEHAVE predictions were evaluated using four standardized fuel models that partially described oak savanna fuel conditions: Fuel Model 1 (Short Grass), 2 (Timber and Grass), 3 (Tall Grass), and 9 (Hardwood Litter). Although all four models yielded regressions with R2 in excess of 0.8, Fuel Model 2 produced the most reliable fire behavior predictions.

  1. Bayesian model selection validates a biokinetic model for zirconium processing in humans

    PubMed Central

    2012-01-01

    Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction of radioactive zirconium retention in various organs as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although such models are easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model, the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
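
    Thermodynamic integration estimates a log marginal likelihood as the integral, over an inverse temperature t, of the expected log-likelihood under the "power posterior" (prior times likelihood to the power t). A toy sketch for a conjugate normal model, where each power posterior can be sampled exactly (the paper's compartmental setting requires MCMC instead); all constants are invented:

        import numpy as np

        rng = np.random.default_rng(9)
        n, sigma2, tau2 = 50, 1.0, 1.0                 # hypothetical model constants
        y = rng.normal(0.5, np.sqrt(sigma2), size=n)   # synthetic data

        def log_lik(theta):
            return (-0.5 * n * np.log(2 * np.pi * sigma2)
                    - 0.5 * np.sum((y - theta) ** 2) / sigma2)

        ts = np.linspace(0, 1, 21)
        means = []
        for t in ts:
            # Power posterior for prior N(0, tau2) and likelihood^t is normal.
            prec = 1 / tau2 + t * n / sigma2
            mu = (t * y.sum() / sigma2) / prec
            draws = rng.normal(mu, np.sqrt(1 / prec), size=2000)
            means.append(np.mean([log_lik(th) for th in draws]))

        log_evidence = np.trapz(means, ts)   # per model; differences give log Bayes factors
        print("log marginal likelihood:", round(log_evidence, 2))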

  2. NEW MDS AND CLUSTERING BASED ALGORITHMS FOR PROTEIN MODEL QUALITY ASSESSMENT AND SELECTION

    PubMed Central

    WANG, QINGGUO; SHANG, CHARLES; XU, DONG

    2014-01-01

    In protein tertiary structure prediction, assessing the quality of predicted models is an essential task. Over the past years, many methods have been proposed for the protein model quality assessment (QA) and selection problem. Despite significant advances, the discerning power of current methods is still unsatisfactory. In this paper, we propose two new algorithms, CC-Select and MDS-QA, based on multidimensional scaling and k-means clustering. For the model selection problem, CC-Select combines consensus with clustering techniques to select the best models from a given pool. Given a set of predicted models, CC-Select first calculates a consensus score for each structure based on its average pairwise structural similarity to other models. Then, similar structures are grouped into clusters using multidimensional scaling and clustering algorithms. In each cluster, the one with the highest consensus score is selected as a candidate model. For the QA problem, MDS-QA combines single-model scoring functions with consensus to determine more accurate assessment score for every model in a given pool. Using extensive benchmark sets of a large collection of predicted models, we compare the two algorithms with existing state-of-the-art quality assessment methods and show significant improvement. PMID:24808625
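
    A compressed sketch of the CC-Select idea as described (with a hypothetical similarity matrix; the real method scores predicted structures by pairwise structural similarity):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.manifold import MDS

        rng = np.random.default_rng(10)
        m = 40                                       # models in the pool
        sim = rng.uniform(0.2, 1.0, size=(m, m))     # stand-in pairwise similarities
        sim = (sim + sim.T) / 2
        np.fill_diagonal(sim, 1.0)

        consensus = sim.mean(axis=1)                 # average similarity to the pool

        # Embed models by dissimilarity, then group similar structures.
        coords = MDS(n_components=3, dissimilarity="precomputed",
                     random_state=0).fit_transform(1.0 - sim)
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)

        # One candidate per cluster: the member with the highest consensus score.
        candidates = [int(np.where(labels == c)[0][np.argmax(consensus[labels == c])])
                      for c in range(5)]
        print("candidate models:", candidates)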

  3. NEW MDS AND CLUSTERING BASED ALGORITHMS FOR PROTEIN MODEL QUALITY ASSESSMENT AND SELECTION.

    PubMed

    Wang, Qingguo; Shang, Charles; Xu, Dong; Shang, Yi

    2013-10-25

    In protein tertiary structure prediction, assessing the quality of predicted models is an essential task. Over the past years, many methods have been proposed for the protein model quality assessment (QA) and selection problem. Despite significant advances, the discerning power of current methods is still unsatisfactory. In this paper, we propose two new algorithms, CC-Select and MDS-QA, based on multidimensional scaling and k-means clustering. For the model selection problem, CC-Select combines consensus with clustering techniques to select the best models from a given pool. Given a set of predicted models, CC-Select first calculates a consensus score for each structure based on its average pairwise structural similarity to other models. Then, similar structures are grouped into clusters using multidimensional scaling and clustering algorithms. In each cluster, the one with the highest consensus score is selected as a candidate model. For the QA problem, MDS-QA combines single-model scoring functions with consensus to determine more accurate assessment score for every model in a given pool. Using extensive benchmark sets of a large collection of predicted models, we compare the two algorithms with existing state-of-the-art quality assessment methods and show significant improvement. PMID:24808625

  4. Diagnosing Hybrid Systems: a Bayesian Model Selection Approach

    NASA Technical Reports Server (NTRS)

    McIlraith, Sheila A.

    2005-01-01

    In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.

  5. Item Response Models for Examinee-Selected Items

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei

    2012-01-01

    In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…

  6. Default Bayes Factors for Model Selection in Regression

    ERIC Educational Resources Information Center

    Rouder, Jeffrey N.; Morey, Richard D.

    2012-01-01

    In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…

  7. Beyond the List: Schools Selecting Alternative CSR Models.

    ERIC Educational Resources Information Center

    Clark, Gail; Apthorp, Helen; Van Buhler, Rebecca; Dean, Ceri; Barley, Zoe

    A study was conducted to describe the population of alternative models for comprehensive school reform in the region served by Mid-continent Research for Education and Learning (McREL). The study addressed the questions of whether schools that did not propose to adopt widely known or implemented reform models were able to design a reform process…

  8. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
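
    The Akaike ranking step is easy to reproduce; a sketch with hypothetical maximized log-likelihoods and parameter counts, also reporting Akaike weights:

        import numpy as np

        # Hypothetical (log-likelihood, number of parameters) per fitted model.
        fits = {"model_A": (-120.4, 4), "model_B": (-118.9, 6), "model_C": (-119.5, 5)}

        aic = {name: 2 * k - 2 * logL for name, (logL, k) in fits.items()}
        best = min(aic, key=aic.get)

        # Akaike differences and weights express relative support for each model.
        delta = {m: aic[m] - aic[best] for m in aic}
        w = {m: np.exp(-0.5 * d) for m, d in delta.items()}
        total = sum(w.values())
        for m in sorted(aic, key=aic.get):
            print(m, round(aic[m], 1), round(w[m] / total, 3))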

  9. Achieving runtime adaptability through automated model evolution and variant selection

    NASA Astrophysics Data System (ADS)

    Mosincat, Adina; Binder, Walter; Jazayeri, Mehdi

    2014-01-01

    Dynamically adaptive systems propose adaptation by means of variants that are specified in the system model at design time and allow for a fixed set of different runtime configurations. However, in a dynamic environment, unanticipated changes may result in the inability of the system to meet its quality requirements. To allow the system to react to these changes, this article proposes a solution for automatically evolving the system model by integrating new variants and periodically validating the existing ones based on updated quality parameters. To illustrate this approach, the article presents a BPEL-based framework using a service composition model to represent the functional requirements of the system. The framework estimates quality of service (QoS) values based on information provided by a monitoring mechanism, ensuring that changes in QoS are reflected in the system model. The article shows how the evolved model can be used at runtime to increase the system's autonomic capabilities and delivered QoS.

  10. Demographic modeling of selected fish species with RAMAS

    SciTech Connect

    Saila, S.; Martin, B.; Ferson, S.; Ginzburg, L.; Millstein, J.

    1991-03-01

    The microcomputer program RAMAS 3, developed for EPRI, has been used to model the intrinsic natural variability of seven important fish species: cod, Atlantic herring, yellowtail flounder, haddock, striped bass, American shad and white perch. Demographic data used to construct age-based population models included information on spawning biology, longevity, sex ratio and (age-specific) mortality and fecundity. These data were collected from published and unpublished sources. The natural risks of extinction and of falling below threshold population abundances (quasi-extinction) are derived for each of the seven fish species based on measured and estimated values for their demographic parameters. The analysis of these species provides evidence that including density-dependent compensation in the demographic model typically lowers the expected chance of extinction. If density dependence generally acts as a restoring force, models that include it should fluctuate less than models without compensation, since density-dependent populations experience a pull towards equilibrium. Since extinction probabilities are determined by the size of the fluctuation of population abundance, models without density dependence will show higher risks of extinction, given identical circumstances. Thus, models without compensation can be used as conservative estimators of risk; that is, if a compensation-free model yields acceptable extinction risk, adding compensation will not increase this risk. Since it is usually difficult to estimate the parameters needed for a model with compensation, such conservative estimates of the risks of extinction based on a model without compensation are very useful in the methodology of impact assessment. 103 refs., 19 figs., 10 tabs.
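
    As a schematic of the quasi-extinction calculation (a toy age-structured model with invented vital rates, not RAMAS itself):

        import numpy as np

        rng = np.random.default_rng(11)

        # Hypothetical 3-age-class Leslie matrix with lognormal environmental noise.
        L = np.array([[0.0, 1.2, 2.0],
                      [0.5, 0.0, 0.0],
                      [0.0, 0.7, 0.0]])
        n0 = np.array([100.0, 50.0, 20.0])
        threshold, years, reps = 30.0, 50, 2000

        hits = 0
        for _ in range(reps):
            n = n0.copy()
            for _ in range(years):
                noise = rng.lognormal(mean=0.0, sigma=0.3, size=L.shape)
                n = (L * noise) @ n
                if n.sum() < threshold:    # quasi-extinction event
                    hits += 1
                    break
        print("quasi-extinction risk:", hits / reps)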

  11. A Free-Knot Spline Modeling Framework for Piecewise Linear Logistic Regression in Complex Samples with Body Mass Index and Mortality as an Example

    PubMed Central

    Keith, Scott W.; Allison, David B.

    2014-01-01

    This paper details the design, evaluation, and implementation of a framework for detecting and modeling non-linearity between a binary outcome and a continuous predictor variable adjusted for covariates in complex samples. The framework provides familiar-looking parameterizations of output in terms of linear slope coefficients and odds ratios. Estimation methods focus on maximum likelihood optimization of piecewise linear free-knot splines formulated as B-splines. Correctly specifying the optimal number and positions of the knots improves the model, but is marked by computational intensity and numerical instability. Our inference methods utilize both parametric and non-parametric bootstrapping. Unlike other non-linear modeling packages, this framework is designed to incorporate multistage survey sample designs common to nationally representative datasets. We illustrate the approach and evaluate its performance in specifying the correct number of knots under various conditions with an example using body mass index (BMI, kg/m2) and the complex multistage sampling design from the Third National Health and Nutrition Examination Survey to simulate binary mortality outcomes data having realistic non-linear sample-weighted risk associations with BMI. BMI and mortality data provide a particularly apt example and area of application since BMI is commonly recorded in large health surveys with complex designs, often categorized for modeling, and non-linearly related to mortality. When complex sample design considerations were ignored, our method was generally similar to or more accurate than two common model selection procedures, Schwarz’s Bayesian Information Criterion (BIC) and Akaike’s Information Criterion (AIC), in terms of correctly selecting the correct number of knots. Our approach provided accurate knot selections when complex sampling weights were incorporated, while AIC and BIC were not effective under these conditions. PMID:25610831
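
    Piecewise linear B-splines in a logistic regression can be sketched with patsy/statsmodels using degree-1 splines and fixed knots (the paper instead optimizes the number and positions of the knots and accounts for the survey design; the simulated data here are invented):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 2000
        bmi = rng.uniform(16, 45, size=n)
        # Simulated U-shaped mortality risk in BMI.
        logit_p = 0.02 * (bmi - 27) ** 2 - 2.5
        death = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
        df = pd.DataFrame({"death": death, "bmi": bmi})

        # degree=1 B-splines give a continuous piecewise-linear log-odds curve.
        fit = smf.logit("death ~ bs(bmi, knots=(22, 27, 32), degree=1)",
                        data=df).fit(disp=0)
        print("AIC:", round(fit.aic, 1))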

  12. MATHEMATICAL MODEL FOR THE SELECTIVE DEPOSITION OF INHALED PHARMACEUTICALS

    EPA Science Inventory

    To accurately assess the potential therapeutic effects of airborne drugs, the deposition sites of inhaled particles must be known. Herein, an original theory is presented for physiologically based pharmacokinetic modeling and related prophylaxis of airway diseases. The mathematical...

  13. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches, and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
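
    A toy genetic algorithm for covariate selection by AIC in a linear model gives the flavor (the population pharmacokinetic setting searches a far richer model space; every setting below is arbitrary):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n, p = 300, 8
        X = rng.normal(size=(n, p))
        y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=n)   # only columns 0 and 3 matter

        def aic(mask):
            if not mask.any():
                return np.inf
            return sm.OLS(y, sm.add_constant(X[:, mask])).fit().aic

        pop = rng.integers(0, 2, size=(20, p)).astype(bool)   # initial population
        for _ in range(30):
            scores = np.array([aic(m) for m in pop])
            parents = pop[np.argsort(scores)][:10]            # truncation selection
            cut = rng.integers(1, p, size=10)
            kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 10][c:]])
                             for i, c in enumerate(cut)])     # one-point crossover
            kids ^= rng.random(kids.shape) < 0.05             # bit-flip mutation
            pop = np.vstack([parents, kids])
        best = pop[np.argmin([aic(m) for m in pop])]
        print("selected covariates:", np.where(best)[0])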

  14. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness. PMID:25034338

  15. On Large Time Behavior and Selection Principle for a Diffusive Carr-Penrose Model

    NASA Astrophysics Data System (ADS)

    Conlon, Joseph G.; Dabkowski, Michael; Wu, Jingchen

    2016-04-01

    This paper is concerned with the study of a diffusive perturbation of the linear LSW model introduced by Carr and Penrose. A main subject of interest is to understand how the presence of diffusion acts as a selection principle, which singles out a particular self-similar solution of the linear LSW model as determining the large time behavior of the diffusive model. A selection principle is rigorously proven for a model which is a semiclassical approximation to the diffusive model. Upper bounds on the rate of coarsening are also obtained for the full diffusive model.

  16. Turbulence Model Selection for Low Reynolds Number Flows

    PubMed Central

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. The NACA4415 airfoil is commonly used in wind turbines and UAV applications. Its stall characteristics are gradual compared to those of thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST and k-kl-ω, and finally the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure to establish the accuracy of the simulation, in accord with previous experimental results, has been discussed in detail. PMID:27104354

  17. Catalog of selected heavy duty transport energy management models

    NASA Technical Reports Server (NTRS)

    Colello, R. G.; Boghani, A. B.; Gardella, N. C.; Gott, P. G.; Lee, W. D.; Pollak, E. C.; Teagan, W. P.; Thomas, R. G.; Snyder, C. M.; Wilson, R. P., Jr.

    1983-01-01

    A catalog of energy management models for heavy duty transport systems powered by diesel engines is presented. The catalog results from a literature survey, supplemented by telephone interviews and mailed questionnaires to discover the major computer models currently used in the transportation industry in the following categories: heavy duty transport systems, which consist of highway (vehicle simulation), marine (ship simulation), rail (locomotive simulation), and pipeline (pumping station simulation); and heavy duty diesel engines, which involve models that match the intake/exhaust system to the engine, fuel efficiency, emissions, combustion chamber shape, fuel injection system, heat transfer, intake/exhaust system, operating performance, and waste heat utilization devices, i.e., turbocharger, bottoming cycle.

  18. Turbulence Model Selection for Low Reynolds Number Flows.

    PubMed

    Aftab, S M A; Mohd Rafie, A S; Razak, N A; Ahmad, K A

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. The NACA4415 airfoil is commonly used in wind turbines and UAV applications. Its stall characteristics are gradual compared to those of thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST and k-kl-ω, and finally the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure to establish the accuracy of the simulation, in accord with previous experimental results, has been discussed in detail. PMID:27104354

  19. Sexual selection under parental choice: a revision to the model.

    PubMed

    Apostolou, Menelaos

    2014-06-01

    Across human cultures, parents exercise considerable influence over their children's mate choices. The model of parental choice provides a good account of these patterns, but its prediction that male parents exercise more control than female ones is not well founded in evolutionary theory. To address this shortcoming, the present article proposes a revision to the model. In particular, parental uncertainty, residual reproductive value, reproductive variance, asymmetry in the control of resources, physical strength, and access to weaponry make control over mating more profitable for male parents than female ones; in turn, this produces an asymmetrical incentive for controlling mate choice. Several implications of this formulation are also explored. PMID:24474549

  20. A Learner Support Model Based on Peer Tutor Selection

    ERIC Educational Resources Information Center

    van Rosmalen, P.; Sloep, P.; Kester, L.; Brouns, F.; de Croock, M.; Pannekeet, K.; Koper, R.

    2008-01-01

    The introduction of elearning often leads to an increase in the time staff spends on tutoring. To alleviate the workload of staff tutors, we developed a model for organizing and supporting learner-related interactions in elearning systems. It makes use of the knowledge and experience of peers and builds on the assumption that (lifelong) learners,…

  1. A Data Envelopment Analysis Model for Renewable Energy Technology Selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Public and media interest in alternative energy sources, such as renewable fuels, has rapidly increased in recent years due to higher prices for oil and natural gas. However, the current body of research providing comparative decision making models that either rank these alternative energy sources a...

  2. Bayesian Variable Selection in Multilevel Item Response Theory Models with Application in Genomics.

    PubMed

    Fragoso, Tiago M; de Andrade, Mariza; Pereira, Alexandre C; Rosa, Guilherme J M; Soler, Júlia M P

    2016-04-01

    The goal of this paper is to present an implementation of stochastic search variable selection (SSVS) for a multilevel model from item response theory (IRT). As experimental settings get more complex and models are required to integrate multiple (and sometimes massive) sources of information, a model that can jointly summarize and select the most relevant characteristics can provide better interpretation and a deeper insight into the problem. A multilevel IRT model recently proposed in the literature for modeling multifactorial diseases is extended to perform variable selection in the presence of thousands of covariates using SSVS. We derive the conditional distributions required for such a task as well as an acceptance-rejection step that allows for SSVS in high dimensional settings using a Markov chain Monte Carlo algorithm. We validate the variable selection procedure through simulation studies, and illustrate its application on a study with genetic markers associated with the metabolic syndrome. PMID:27027518
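
    The SSVS machinery is easiest to see in a plain linear regression with a spike-and-slab prior (George and McCulloch style); the multilevel IRT version adds layers, but the Gibbs updates have this shape. All hyperparameters below are arbitrary:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(6)
        n, p = 200, 10
        X = rng.normal(size=(n, p))
        beta_true = np.array([2.0, 0, 0, -1.5] + [0] * 6)
        y = X @ beta_true + rng.normal(size=n)

        tau0, tau1, sigma2 = 0.01, 10.0, 1.0   # spike, slab, (fixed) noise variances
        beta, gamma = np.zeros(p), np.zeros(p, dtype=bool)
        keep = np.zeros(p)

        for it in range(2000):
            for j in range(p):
                # Conjugate normal update for beta_j given everything else.
                r = y - X @ beta + X[:, j] * beta[j]
                prior_var = tau1 if gamma[j] else tau0
                v = 1.0 / (X[:, j] @ X[:, j] / sigma2 + 1.0 / prior_var)
                m = v * (X[:, j] @ r) / sigma2
                beta[j] = rng.normal(m, np.sqrt(v))
                # Inclusion indicator from the spike/slab mixture odds (0.5 prior).
                odds = (norm.pdf(beta[j], 0, np.sqrt(tau1))
                        / norm.pdf(beta[j], 0, np.sqrt(tau0)))
                gamma[j] = rng.random() < odds / (1.0 + odds)
            if it >= 1000:
                keep += gamma
        print("posterior inclusion probabilities:", np.round(keep / 1000, 2))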

  3. On Numerical Aspects of Bayesian Model Selection in High and Ultrahigh-dimensional Settings

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    This article examines the convergence properties of a Bayesian model selection procedure based on a non-local prior density in ultrahigh-dimensional settings. The performance of the model selection procedure is also compared to popular penalized likelihood methods. Coupling diagnostics are used to bound the total variation distance between iterates in a Markov chain Monte Carlo (MCMC) algorithm and the posterior distribution on the model space. In several simulation scenarios in which the number of observations exceeds 100, rapid convergence and high accuracy of the Bayesian procedure is demonstrated. Conversely, the coupling diagnostics are successful in diagnosing lack of convergence in several scenarios for which the number of observations is less than 100. The accuracy of the Bayesian model selection procedure in identifying high probability models is shown to be comparable to commonly used penalized likelihood methods, including extensions of the smoothly clipped absolute deviation (SCAD) and least absolute shrinkage and selection operator (LASSO) procedures. PMID:24683431

  4. Testing goodness of fit of parametric models for censored data.

    PubMed

    Nysen, Ruth; Aerts, Marc; Faes, Christel

    2012-09-20

    We propose and study a goodness-of-fit test for left-censored, right-censored, and interval-censored data assuming random censorship. Main motivation comes from dietary exposure assessment in chemical risk assessment, where the determination of an appropriate distribution for concentration data is of major importance. We base the new goodness-of-fit test procedure proposed in this paper on the order selection test. As part of the testing procedure, we extend the null model to a series of nested alternative models for censored data. Then, we use a modified AIC model selection to select the best model to describe the data. If a model with one or more extra parameters is selected, then we reject the null hypothesis. As an alternative to the use of the asymptotic null distribution of the test statistic, we define a bootstrap-based procedure. We illustrate the applicability of the test procedure on data of cadmium concentrations and on data from the Signal Tandmobiel study and demonstrate its performance characteristics through simulation studies. PMID:22714389
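
    The censored-likelihood ingredient is the main twist: censored observations contribute a CDF term rather than a density. A sketch of the AIC for a left-censored lognormal null model (the full order selection test then compares this against a series of nested extensions; the data and limit of detection here are synthetic):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import lognorm

        rng = np.random.default_rng(12)
        x = rng.lognormal(0.0, 0.6, size=200)
        lod = 0.6                            # hypothetical limit of detection
        cens = x < lod                       # left-censored concentrations
        obs = np.maximum(x, lod)

        def negloglik(params):
            mu, s = params[0], np.exp(params[1])
            ll = lognorm.logpdf(obs[~cens], s, scale=np.exp(mu)).sum()
            ll += cens.sum() * lognorm.logcdf(lod, s, scale=np.exp(mu))
            return -ll

        fit = minimize(negloglik, x0=[0.0, 0.0])
        aic_null = 2 * 2 + 2 * fit.fun       # 2k - 2 log L with k = 2
        print("AIC of the lognormal null model:", round(aic_null, 1))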

  5. Behavior changes in SIS STD models with selective mixing

    SciTech Connect

    Hyman, J.M.; Li, J.

    1997-08-01

    The authors propose and analyze a heterogeneous, multigroup, susceptible-infective-susceptible (SIS) sexually transmitted disease (STD) model where the desirability and acceptability in partnership formations are functions of the infected individuals. They derive explicit formulas for the epidemic thresholds, prove the existence and uniqueness of the equilibrium states for the two-group model and provide a complete analysis of their local and global stability. The authors then investigate the effects of behavior changes on the transmission dynamics and analyze the sensitivity of the epidemic to the magnitude of the behavior changes. They verify that if people modify their behavior to reduce the probability of infection with individuals in highly infected groups, through either reduced contacts, reduced partner formations, or using safe sex, the infection level may be decreased. However, if people continue to have intragroup and intergroup partnerships, then changing the desirability and acceptability formation cannot eradicate the epidemic once it exceeds the epidemic threshold.
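
    The basic two-group SIS dynamics (without the paper's infection-dependent mixing) integrate in a few lines; the transmission matrix and recovery rate below are hypothetical:

        import numpy as np
        from scipy.integrate import solve_ivp

        # beta[i, j]: transmission rate to group i from group j; gamma: recovery.
        beta = np.array([[0.3, 0.1],
                         [0.1, 0.2]])
        gamma = 0.1

        def sis(t, I):
            S = 1.0 - I                  # group sizes normalized to one
            return S * (beta @ I) - gamma * I

        sol = solve_ivp(sis, (0, 400), [0.01, 0.01])
        print("endemic infection levels:", np.round(sol.y[:, -1], 3))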

  6. Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Li, Jun

    In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors, taking into account three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromised solution for the proposed constrained multiobjective portfolio selection model.

  7. Parameter selection and testing the soil water model SOIL

    NASA Astrophysics Data System (ADS)

    McGechan, M. B.; Graham, R.; Vinten, A. J. A.; Douglas, J. T.; Hooda, P. S.

    1997-08-01

    The soil water and heat simulation model SOIL was tested for its suitability to study the processes of transport of water in soil. Required parameters, particularly soil hydraulic parameters, were determined by field and laboratory tests for some common soil types and for soils subjected to contrasting treatments of long-term grassland and tilled land under cereal crops. Outputs from simulations were shown to be in reasonable agreement with independently measured field drain outflows and soil water content histories.

  8. Model selection for identifying power-law scaling.

    PubMed

    Ton, Robert; Daffertshofer, Andreas

    2016-08-01

    Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. PMID:26774613
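
    A bare-bones DFA shows the quantity the authors model: the root-mean-squared fluctuation per window size. Their contribution replaces the usual least-squares fit of the log-log slope with likelihood-based model comparison; the slope fit below is only the conventional baseline:

        import numpy as np

        def dfa_fluctuations(x, scales):
            """Detrended fluctuation analysis: RMS fluctuation per window size."""
            y = np.cumsum(x - np.mean(x))      # integrated profile
            F = []
            for s in scales:
                n = len(y) // s
                segs = y[:n * s].reshape(n, s)
                t = np.arange(s)
                # Remove a linear trend per window, keep the RMS residual.
                resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
                F.append(np.sqrt(np.mean(np.square(resid))))
            return np.array(F)

        rng = np.random.default_rng(7)
        x = rng.normal(size=4096)              # white noise: exponent near 0.5
        scales = np.unique(np.logspace(2, 9, 12, base=2).astype(int))
        F = dfa_fluctuations(x, scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print("estimated scaling exponent:", round(alpha, 2))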

  9. Selective Cooperation in Early Childhood - How to Choose Models and Partners.

    PubMed

    Hermes, Jonas; Behne, Tanya; Studte, Kristin; Zeyen, Anna-Maria; Gräfenhain, Maria; Rakoczy, Hannes

    2016-01-01

    Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for a strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children's selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks. PMID:27505043

  10. Selective Cooperation in Early Childhood – How to Choose Models and Partners

    PubMed Central

    Hermes, Jonas; Behne, Tanya; Studte, Kristin; Zeyen, Anna-Maria; Gräfenhain, Maria; Rakoczy, Hannes

    2016-01-01

    Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for a strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children’s selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks. PMID:27505043

  11. Estimates of live-tree carbon stores in the Pacific Northwest are sensitive to model selection

    PubMed Central

    2011-01-01

    Background Estimates of live-tree carbon stores are influenced by numerous uncertainties. One of them is model-selection uncertainty: one has to choose among multiple empirical equations and conversion factors that can be plausibly justified as locally applicable to calculate the carbon store from inventory measurements such as tree height and diameter at breast height (DBH). Here we quantify the model-selection uncertainty for the five most numerous tree species in six counties of northwest Oregon, USA. Results The results of our study demonstrate that model-selection error may introduce 20 to 40% uncertainty into a live-tree carbon estimate, possibly making this form of error the largest source of uncertainty in estimation of live-tree carbon stores. The effect of model selection could be even greater if models are applied beyond the height and DBH ranges for which they were developed. Conclusions Model-selection uncertainty is potentially large enough that it could limit the ability to track forest carbon with the precision and accuracy required by carbon accounting protocols. Without local validation based on detailed measurements of usually destructively sampled trees, it is very difficult to choose the best model when there are several available. Our analysis suggests that considering tree form in equation selection may better match trees to existing equations and that substantial gaps exist, in terms of both species and diameter ranges, that are ripe for new model-building effort. PMID:21477353

  12. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    PubMed

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. PMID:25345575
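
    As a minimal sketch of the coupling step, the snippet below combines score-test p-values for a single covariate across several Cox models using Fisher's method; the three p-values and the scipy-based implementation are illustrative assumptions, not the authors' code.

```python
# Combine one covariate's per-model score-test p-values via Fisher's method.
import numpy as np
from scipy import stats

def fisher_combined_pvalue(pvalues):
    """Under independence, -2 * sum(log p) is chi-squared with 2k df."""
    pvalues = np.asarray(pvalues, dtype=float)
    statistic = -2.0 * np.log(pvalues).sum()
    return stats.chi2.sf(statistic, df=2 * len(pvalues))

# Hypothetical p-values for one covariate from three cause-specific models:
print(fisher_combined_pvalue([0.04, 0.20, 0.08]))  # small value favors selection
```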

  13. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles.

    PubMed

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-03-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949

  14. Model selection for a medical diagnostic decision support system: a breast cancer detection case.

    PubMed

    West, D; West, V

    2000-11-01

    There are a number of different quantitative models that can be used in a medical diagnostic decision support system (MDSS) including parametric methods (linear discriminant analysis or logistic regression), non-parametric models (K nearest neighbor, or kernel density) and several neural network models. The complexity of the diagnostic task is thought to be one of the prime determinants of model selection. Unfortunately, there is no theory available to guide model selection. Practitioners are left to either choose a favorite model or to test a small subset using cross validation methods. This paper illustrates the use of a self-organizing map (SOM) to guide model selection for a breast cancer MDSS. The topological ordering properties of the SOM are used to define targets for an ideal accuracy level similar to a Bayes optimal level. These targets can then be used in model selection, variable reduction, parameter determination, and to assess the adequacy of the clinical measurement system. These ideas are applied to a successful model selection for a real-world breast cancer database. Diagnostic accuracy results are reported for individual models, for ensembles of neural networks, and for stacked predictors. PMID:10998586

  15. Model selection using multivariate functional data analysis for fast uncertainty quantification in subsurface reservoir forecasting

    NASA Astrophysics Data System (ADS)

    Grujic, O.; Caers, J.

    2014-12-01

    Modern approaches to uncertainty quantification in the subsurface rely on complex procedures of geological modeling combined with numerical simulation of flow & transport. This approach requires long computational times, rendering any full Monte Carlo simulation infeasible; in particular, solving the flow & transport problem takes hours of computing time in real field problems. This motivated the development of model selection methods aiming to identify a small subset of models that capture important statistics of a larger ensemble of geological model realizations. A recent method, based on model selection in metric space, termed the distance-kernel method (DKM), allows selecting representative models through kernel k-medoid clustering. The distance defining the metric space is usually based on some approximate flow model. However, the output of an approximate flow model can be multivariate (reporting heads/pressures, saturation, rates). In addition, the modeler may have information from several other approximate models (e.g. upscaled models) or summary statistical information about geological heterogeneity that could allow for a more accurate selection. In an effort to perform model selection based on multivariate attributes, we rely on functional data analysis, which allows for an exploitation of covariances between time-varying multivariate numerical simulation output. Based on mixed functional principal component analysis, we construct a lower dimensional space in which kernel k-medoid clustering is used for model selection. In this work we demonstrate the functional approach on a complex compositional flow problem where the geological uncertainty consists of channels with uncertain spatial distribution of facies, proportions, orientations and geometries. We illustrate that using multivariate attributes and multiple approximate models provides accuracy improvement over using a single attribute.
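
    The clustering step of a DKM-style workflow can be sketched with a plain k-medoid routine on a precomputed distance matrix; the NumPy implementation and toy distances below are a generic illustration, not the authors' code.

```python
# k-medoid clustering on a precomputed distance matrix; the k medoids are the
# representative model realizations retained for full flow simulation.
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # new medoid = member minimizing total within-cluster distance
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Toy distances between 50 model responses (in practice, distances between
# approximate-flow or functional-PCA representations of the realizations):
x = np.random.default_rng(1).random(50)
D = np.abs(np.subtract.outer(x, x))
medoids, labels = k_medoids(D, k=5)
print("selected representative models:", medoids)
```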

  16. Lessons for neurotoxicology from selected model compounds: SGOMSEC joint report.

    PubMed Central

    Rice, D C; Evangelista de Duffard, A M; Duffard, R; Iregren, A; Satoh, H; Watanabe, C

    1996-01-01

    The ability to identify potential neurotoxicants depends upon the characteristics of our test instruments. The neurotoxic properties of lead, methylmercury, polychlorinated biphenyls, and organic solvents would all have been detected at some dose level by tests in current use, provided that the doses were high enough and administered at an appropriate time such as during gestation. The adequacy of animal studies, particularly rodent studies, to predict intake levels at which human health can be protected is disappointing, however. It is unlikely that the use of advanced behavioral methodology would alleviate the apparent lack of sensitivity of the rodent model for many agents. PMID:8860323

  17. Causal Inference and Model Selection in Complex Settings

    NASA Astrophysics Data System (ADS)

    Zhao, Shandong

    Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply to a problem of this nature, and a doable fix to correctly

  18. SELECTION OF CANDIDATE EUTROPHICATION MODELS FOR TOTAL MAXIMUM DAILY LOADS ANALYSES

    EPA Science Inventory

    A tiered approach was developed to evaluate candidate eutrophication models to select a common suite of models that could be used for Total Maximum Daily Loads (TMDL) analyses in estuaries, rivers, and lakes/reservoirs. Consideration for linkage to watershed models and ecologica...

  19. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to be used to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In this context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  20. Modeling Network Intrusion Detection System Using Feature Selection and Parameters Optimization

    NASA Astrophysics Data System (ADS)

    Kim, Dong Seong; Park, Jong Sou

    Previous approaches for modeling Intrusion Detection Systems (IDS) have been twofold: improving detection model(s) in terms of (i) feature selection of audit data through wrapper and filter methods and (ii) parameters optimization of detection model design, based on classification, clustering algorithms, etc. In this paper, we present three approaches to model IDS in the context of feature selection and parameters optimization. First, we present Fusion of Genetic Algorithm (GA) and Support Vector Machines (SVM) (FuGAS), which employs combinations of GA and SVM through genetic operation and is capable of building an optimal detection model with only selected important features and optimal parameter values. Second, we present Correlation-based Hybrid Feature Selection (CoHyFS), which utilizes a filter method in conjunction with GA for feature selection in order to reduce long training time. Third, we present Simultaneous Intrinsic Model Identification (SIMI), which adopts Random Forest (RF) and shows better intrusion detection rates and feature selection results, with no additional computational overhead. We show the experimental results and analysis of the three approaches on the KDD 1999 intrusion detection datasets.
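
    A toy version of the GA-plus-SVM wrapper idea (in the spirit of FuGAS, but much simplified and run on synthetic data) can be written with scikit-learn; the population size, mutation rate, and fitness definition below are arbitrary choices, not the authors' settings.

```python
# GA-driven feature selection wrapped around an SVM classifier (toy sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(C=1.0, gamma="scale"), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```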

  1. Impacts of selected dietary polyphenols on caramelization in model systems.

    PubMed

    Zhang, Xinchen; Chen, Feng; Wang, Mingfu

    2013-12-15

    This study investigated the impacts of six dietary polyphenols (phloretin, naringenin, quercetin, epicatechin, chlorogenic acid and rosmarinic acid) on fructose caramelization in thermal model systems at either neutral or alkaline pH. These polyphenols were found to increase the browning intensity and antioxidant capacity of caramel. The chemical reactions in the system of sugar and polyphenol, which include formation of polyphenol-sugar adducts, were found to be partially responsible for the formation of brown pigments and heat-induced antioxidants based on instrumental analysis. In addition, rosmarinic acid was demonstrated to significantly inhibit the formation of 5-hydroxymethylfurfural (HMF). Thus this research added to the efforts of controlling caramelization by dietary polyphenols under thermal conditions, and provided some evidence to propose dietary polyphenols as functional ingredients to modify the caramel colour and bioactivity as well as to lower the amount of heat-induced contaminants such as 5-hydroxymethylfurfural (HMF). PMID:23993506

  2. Shape Selection in the non-Euclidean Model of Elasticity

    NASA Astrophysics Data System (ADS)

    Gemmer, John

    In this dissertation we investigate the behavior of radially symmetric non-Euclidean plates of thickness t with constant negative Gaussian curvature. We present a complete study of these plates using the Foppl-von Karman and Kirchhoff reduced theories of elasticity. Motivated by experimental results, we focus on deformations with a periodic profile. For the Foppl-von Karman model, we prove rigorously that minimizers of the elastic energy converge to saddle shaped isometric immersions. In studying this convergence, we prove rigorous upper and lower bounds for the energy that scale like t^2. Furthermore, for deformations with n waves we prove that the lower bound scales like n t^2 while the upper bound scales like n^2 t^2. We also investigate the scaling with thickness of boundary layers where the stretching energy is concentrated with decreasing thickness. For the Kirchhoff model, we investigate isometric immersions of disks with constant negative curvature into R^3, and the minimizers of the bending energy, i.e. the L^2 norm of the principal curvatures, over the class of W^{2,2} isometric immersions. We show the existence of smooth immersions of arbitrarily large geodesic balls in the hyperbolic plane into Euclidean space. In elucidating the connection between these immersions and the non-existence/singularity results of Hilbert and Amsler, we obtain a lower bound for the L^∞ norm of the principal curvatures of such smooth isometric immersions. We also construct piecewise smooth isometric immersions that have a periodic profile, are globally W^{2,2}, and numerically have lower bending energy than their smooth counterparts. The number of periods in these configurations is set by the condition that the principal curvatures of the surface remain finite, and grows approximately exponentially with the radius of the disc.

  3. Selection of a Tritium Dose Model: Defensibility and Reasonableness for DOE Authorization Basis Calculations

    SciTech Connect

    Blanchard, A.; O'Kula, K.R.; East, J.M.

    1998-06-01

    This paper highlights the logic used to select a dispersion/consequence methodology, describes the collection of tritium models contained in the suite of analysis options (the 'tool kit'), and provides application examples.

  4. A physically based model for dielectric charging in an integrated optical MEMS wavelength selective switch.

    SciTech Connect

    Nielson, Gregory N.; Barbastathis, George

    2005-07-01

    A physical parameter based model for dielectric charge accumulation is proposed and used to predict the displacement versus applied voltage and pull-in response of an electrostatic MEMS wavelength selective integrated optical switch.

  5. Modeling the effect of selection history on pop-out visual search.

    PubMed

    Tseng, Yuan-Chi; Glaser, Joshua I; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032
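
    The core of the modeling approach can be illustrated with a bare-bones drift-diffusion simulation: selection-history effects such as POP can be expressed as a shift in drift rate or a starting-point bias toward the recently selected target color. The parameter values below are arbitrary, and this is not the authors' fitting code.

```python
# Minimal drift-diffusion trial simulator; `bias` models a history-driven
# predisposition toward the previously selected target color.
import numpy as np

def simulate_ddm(drift, bias=0.0, threshold=1.0, dt=1e-3, noise=1.0, seed=None):
    """Return (choice, reaction time) for one simulated trial."""
    rng = np.random.default_rng(seed)
    x, t = bias, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

trials = [simulate_ddm(drift=0.8, bias=0.2, seed=i) for i in range(500)]
rts = [t for _, t in trials]
print("mean RT:", np.mean(rts),
      "P(upper boundary):", np.mean([c == 1 for c, _ in trials]))
```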

  6. Traditional and robust vector selection methods for use with similarity based models

    SciTech Connect

    Hines, J. W.; Garvey, D. R.

    2006-07-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant.
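
    The AAKR prediction step itself is compact: a query vector is reconstructed as a Gaussian-kernel weighted average of the retained prototype vectors. The sketch below uses random prototypes and an arbitrary bandwidth purely for illustration.

```python
# Auto-associative kernel regression: reconstruct a query from prototypes.
import numpy as np

def aakr_predict(prototypes, query, bandwidth=1.0):
    d2 = np.sum((prototypes - query) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))         # Gaussian kernel weights
    return w @ prototypes / w.sum()                  # weighted reconstruction

prototypes = np.random.default_rng(1).normal(size=(200, 4))  # training vectors
query = np.array([0.1, -0.2, 0.0, 0.3])
print(aakr_predict(prototypes, query))
```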

  7. Generative model selection using a scalable and size-independent complex network classifier

    SciTech Connect

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named “Generative Model Selection for Complex Networks,” outperforms existing methods with respect to accuracy, scalability, and size-independence.

  8. Alternating evolutionary pressure in a genetic algorithm facilitates protein model selection

    PubMed Central

    Offman, Marc N; Tournier, Alexander L; Bates, Paul A

    2008-01-01

    Background Automatic protein modelling pipelines are becoming ever more accurate; this has come hand in hand with an increasingly complicated interplay between all components involved. Nevertheless, there are still potential improvements to be made in template selection, refinement and protein model selection. Results In the context of an automatic modelling pipeline, we analysed each step separately, revealing several non-intuitive trends and explored a new strategy for protein conformation sampling using Genetic Algorithms (GA). We apply the concept of alternating evolutionary pressure (AEP), i.e. intermediate rounds within the GA runs where unrestrained, linear growth of the model populations is allowed. Conclusion This approach improves the overall performance of the GA by allowing models to overcome local energy barriers. AEP enabled the selection of the best models in 40% of all targets; compared to 25% for a normal GA. PMID:18673557

  9. Bayesian Model Selection in Complex Linear Systems, as Illustrated in Genetic Association Studies

    PubMed Central

    Wen, Xiaoquan

    2013-01-01

    Summary Motivated by examples from genetic association studies, this paper considers the model selection problem in a general complex linear model system and in a Bayesian framework. We discuss formulating model selection problems and incorporating context-dependent a priori information through different levels of prior specifications. We also derive analytic Bayes factors and their approximations to facilitate model selection and discuss their theoretical and computational properties. We demonstrate our Bayesian approach based on an implemented Markov Chain Monte Carlo (MCMC) algorithm in simulations and a real data application of mapping tissue-specific eQTLs. Our novel results on Bayes factors provide a general framework to perform efficient model comparisons in complex linear model systems. PMID:24350677

  10. Extended EMF Models of Synchronous Reluctance Motors and Selection of Main Flux Direction

    NASA Astrophysics Data System (ADS)

    Ichikawa, Shinji; Tomita, Mutuwo; Doki, Shinji; Okuma, Shigeru; Fujiwara, Fumiharu

    A new mathematical model called the Extended EMF (EEMF) model, together with a sensorless control method based on it for PMSMs, has been proposed by the authors, and its effectiveness has been verified by experiments. The purpose of this paper is to apply the EEMF model to sensorless control of synchronous reluctance motors. Since synchronous reluctance motors do not have any permanent magnet, the main flux direction of a motor model can be chosen in two ways, and this choice leads to two different EEMF models. The two models differ from the standpoint of sensorless control. We describe this difference clearly and derive, for each model, the position estimation error caused by deviations in the inductance parameters. Moreover, how to select between the EEMF models is discussed. Finally, the selection method is verified by experiments.

  11. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology

    NASA Astrophysics Data System (ADS)

    Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.

    2012-04-01

    In positron emission tomography (PET) studies involving organs different from the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. We employed here double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with a SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R2 = 0.91) and metabolite [18F]FBAL (R2 = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R2 = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R2 = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with
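
    The SI-versus-DI comparison step reduces to an AIC contest between two least-squares fits. A generic sketch, with hypothetical residual sums of squares and parameter counts, is shown below; it is not tied to any particular compartmental model code.

```python
# AIC for least-squares fits with Gaussian residuals; lower AIC wins.
import numpy as np

def aic_ls(rss, n_obs, n_params):
    return n_obs * np.log(rss / n_obs) + 2 * n_params

n = 30                         # hypothetical number of frames in the tissue TAC
rss_si, k_si = 4.2, 3          # single-input fit: residual sum and parameters
rss_di, k_di = 2.1, 5          # double-input fit adds metabolite parameters
print("SI AIC:", aic_ls(rss_si, n, k_si))
print("DI AIC:", aic_ls(rss_di, n, k_di))   # selected if sufficiently lower
```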

  12. Source-mask selection using computational lithography: further investigation incorporating rigorous resist models

    NASA Astrophysics Data System (ADS)

    Kapasi, Sanjay; Robertson, Stewart; Biafore, John; Smith, Mark D.

    2009-12-01

    Recent publications have emphasized the criticality of computational lithography in source-mask selection for the 32 and 22 nm technology nodes. Lithographers often select illuminator geometries by analyzing aerial images for a limited set of structures using computational lithography tools. Last year, Biafore et al. [1] demonstrated the divergence between aerial image models and resist models in computational lithography. A follow-up study [2] illustrated that the optimal illuminator differs when selected based on a resist model rather than an aerial image model. In that study, optimal source shapes were evaluated for 1D logic patterns using an aerial image model and two distinct commercial resist models, with a physics-based lumped parameter resist model (LPM). Accurately calibrated full physical models are portable across imaging conditions, unlike the lumped models. This study extends that previous work: full physical resist models (FPM) with calibrated resist parameters [3-6] will be used to select optimum illumination geometries for 1D logic patterns. Several imaging parameters, such as Numerical Aperture (NA), source geometries (Annular, Quadrupole, etc.), and illumination configurations for different sizes and pitches, will be explored in the study. Our goal is to compare and analyze the optimal source shapes across various imaging conditions and, in the end, to recommend the optimal source-mask solution for a given set of designs based on all the models.

  13. Balancing Selection in Species with Separate Sexes: Insights from Fisher’s Geometric Model

    PubMed Central

    Connallon, Tim; Clark, Andrew G.

    2014-01-01

    How common is balancing selection, and what fraction of phenotypic variance is attributable to balanced polymorphisms? Despite decades of research, answers to these questions remain elusive. Moreover, there is no clear theoretical prediction about the frequency with which balancing selection is expected to arise within a population. Here, we use an extension of Fisher’s geometric model of adaptation to predict the probability of balancing selection in a population with separate sexes, wherein polymorphism is potentially maintained by two forms of balancing selection: (1) heterozygote advantage, where heterozygous individuals at a locus have higher fitness than homozygous individuals, and (2) sexually antagonistic selection (a.k.a. intralocus sexual conflict), where the fitness of each sex is maximized by different genotypes at a locus. We show that balancing selection is common under biologically plausible conditions and that sex differences in selection or sex-by-genotype effects of mutations can each increase opportunities for balancing selection. Although heterozygote advantage and sexual antagonism represent alternative mechanisms for maintaining polymorphism, they mutually exist along a balancing selection continuum that depends on population and sex-specific parameters of selection and mutation. Sexual antagonism is the dominant mode of balancing selection across most of this continuum. PMID:24812306
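
    For the simplest ingredient of such models, the textbook one-locus heterozygote-advantage result gives the balanced equilibrium allele frequency in closed form; the snippet below states that classical result (it is standard population genetics, not a result specific to this paper).

```python
# One-locus heterozygote advantage: fitnesses 1-s1 (AA), 1 (Aa), 1-s2 (aa)
# yield a stable polymorphism with allele-A equilibrium frequency s2/(s1+s2).
def equilibrium_freq(s1, s2):
    return s2 / (s1 + s2)

print(equilibrium_freq(0.02, 0.01))   # ~0.333 when selection against AA is 2x
```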

  14. Selection of models in the problem of error prediction for navigation systems

    NASA Astrophysics Data System (ADS)

    Neusypin, K. A.; Pupkov, K. A.

    1991-07-01

    A selection criterion based on the degree of observability is proposed for inertial navigation systems. Models are selected that have a maximum degree of observability, determined from the maximum value of the square of the determinant of the observability matrix, or of the observability Gramian in the nonstationary case. Model reduction is carried out using a numerical criterion for the degree of observability.
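
    A minimal numerical version of this criterion, for a linear discrete-time model with matrices A and C, builds the observability matrix and scores the model by the determinant of its Gramian-like product; the matrices below are hypothetical.

```python
# Observability-degree score for a linear model x' = A x, y = C x.
import numpy as np

def observability_degree(A, C):
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    return np.linalg.det(O.T @ O)      # larger value => higher observability

A = np.array([[1.0, 0.1],
              [0.0, 0.95]])
C = np.array([[1.0, 0.0]])
print(observability_degree(A, C))
```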

  15. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  16. A model of face selection in viewing video stories

    PubMed Central

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621

  17. Models of Preconception Care Implementation in Selected Countries

    PubMed Central

    Lo, Sue Seen-Tsing; Zhuo, Jiatong; Han, Jung-Yeol; Delvoye, Pierre; Zhu, Li

    2006-01-01

    Globally, maternal and child health faces diverse challenges depending on a country's level of development. Some countries have introduced or explored preconception care for various reasons. Falling birth rates and increasing knowledge about risk factors for adverse pregnancy outcomes led to the introduction of preconception care in Hong Kong in 1998, and South Korea in 2004. In Hong Kong, comprehensive preconception care including laboratory tests is provided to over 4000 women each year at a cost of $75 per person. In Korea, about 60% of the women served have a known medical risk history, and the challenge is to expand the program capacity to all women who plan pregnancy and to conduct social marketing. Belgium has established an ad hoc committee to develop a comprehensive social marketing and professional training strategy for pilot testing preconception care models in the French-speaking part of Belgium, an area that represents 5 million people and 50,000 births per year, using prenatal care and pediatric clinics, gynecological departments, and the genetic centers. In China, Guangxi province piloted preconceptional HIV testing and counseling among couples who sought the then mandatory premarital medical examination as a component of the three-pronged approach to reduce mother-to-child transmission of HIV. HIV testing rates among couples increased from 38% to 62% over a one-year period. In October 2003, China changed the legal requirement of premarital medical examination from mandatory to “voluntary.” This change was interpreted by most women to mean that the premarital health examination was “unnecessary,” and overall premarital health examination rates dropped. Social marketing efforts piloted in 2004 indicated that 95% of women were willing to pay up to RMB 100 (US$12) for preconception health care services. These case studies illustrate the programmatic feasibility of preconception care services to address maternal and child health and other public

  18. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    PubMed

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening as used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis for marginal association of each SNP. At the second stage, the predictive model is built by a generalized ridge regression simultaneously using the screened SNPs with SNP weights determined by the strength of marginal association. Continuous SNP screening by the smooth thresholding not only makes prediction stable but also leads to a closed form expression of generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimation (SURE), which enables data-dependent choice of the optimal SNP screening cutoff without using cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods including the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows comparable or sometimes improved performance relative to the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) exhibits that the proposed method shows higher predictive power than the gene score and GBLUP methods. PMID:26947266

  19. Do Bayesian Model Weights Tell the Whole Story? New Analysis and Optimal Design Tools for Maximum-Confidence Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, A.; Nowak, W.; Wöhling, T.

    2013-12-01

    Bayesian model averaging (BMA) combines the predictive capabilities of alternative conceptual models into a robust best estimate and allows the quantification of conceptual uncertainty. The individual models are weighted with their posterior probability according to Bayes' theorem. Despite this rigorous procedure, we see four obstacles to robust model ranking: (1) The weights inherit uncertainty related to measurement noise in the calibration data set, which may compromise the reliability of model ranking. (2) Posterior weights rank the models only relative to each other, but do not contain information about the absolute model performance. (3) There is a lack of objective methods to assess whether the suggested models are practically distinguishable or very similar to each other, i.e., whether the individual models explore different regions of the model space. (4) No theory for optimal design (OD) of experiments exists that explicitly aims at maximum-confidence model discrimination. The goal of our study is to overcome these four shortcomings. We determine the robustness of weights against measurement noise (1) by repeatedly perturbing the observed data with random measurement errors and analyzing the variability in the obtained weights. Realizing that model weights have a probability distribution of their own, we introduce an additional term into the overall prediction uncertainty analysis scheme which we call 'weighting uncertainty'. We further assess an 'absolute distance' in performance of the model set from the truth (2) as seen through the eyes of the data by interpreting statistics of Bayesian model evidence. This analysis is of great value for modellers to decide, if the modelling task can be satisfactorily carried out with the model(s) at hand, or if more effort should be invested in extending the set with better performing models. As a further prerequisite for robust model selection, we scrutinize the ability of BMA to distinguish between the models in
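
    The perturbation analysis in point (1) can be sketched in a few lines: repeatedly jitter the calibration data with random measurement noise and recompute the posterior model weights from toy Gaussian likelihoods. Data, models, and the error level below are all hypothetical.

```python
# Stability of Bayesian model weights under repeated measurement-noise draws.
import numpy as np

rng = np.random.default_rng(0)
y_obs = np.array([1.0, 1.4, 2.1, 2.9])            # hypothetical observations
preds = {"M1": np.array([1.0, 1.5, 2.0, 2.5]),    # competing model predictions
         "M2": np.array([0.9, 1.4, 2.2, 3.0])}
sigma = 0.2                                       # measurement error std

def weights(y):
    loglik = {m: -0.5 * np.sum(((y - p) / sigma) ** 2) for m, p in preds.items()}
    z = np.exp(np.array(list(loglik.values())) - max(loglik.values()))
    return dict(zip(loglik, z / z.sum()))         # equal prior model weights

samples = [weights(y_obs + sigma * rng.standard_normal(y_obs.size))
           for _ in range(1000)]
w1 = np.array([s["M1"] for s in samples])
print("M1 weight: mean %.3f, std %.3f" % (w1.mean(), w1.std()))
```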

  20. Statistical Inference of Selection and Divergence from a Time-Dependent Poisson Random Field Model

    PubMed Central

    Amei, Amei; Sawyer, Stanley

    2012-01-01

    We apply a recently developed time-dependent Poisson random field model to aligned DNA sequences from two related biological species to estimate selection coefficients and divergence time. We use Markov chain Monte Carlo methods to estimate species divergence time and selection coefficients for each locus. The model assumes that the selective effects of non-synonymous mutations are normally distributed across genetic loci but constant within loci, and that synonymous mutations are selectively neutral. In contrast with previous models, we do not assume that the individual species are at population equilibrium after divergence. Using a data set of 91 genes in two Drosophila species, D. melanogaster and D. simulans, we estimate the species divergence time (1.68 million years, under the assumed haploid effective population size) and a mean selection coefficient per generation. Although the average selection coefficient is positive, the magnitude of the selection is quite small. Results from numerical simulations are also presented as an accuracy check for the time-dependent model. PMID:22509300

  1. Transferability of optimally-selected climate models in the quantification of climate change impacts on hydrology

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe

    2016-02-01

    Given the ever-increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.

  2. Bankruptcy prediction using SVM models with a new approach to combine features selection and parameter optimisation

    NASA Astrophysics Data System (ADS)

    Zhou, Ligang; Keung Lai, Kin; Yen, Jerome

    2014-03-01

    Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. Support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and features selection. In this study, a new approach based on direct search and features ranking technology is proposed to optimise features selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to the SVM models with parameter optimisation and features selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.

  3. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box, and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems.
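
    The NS backbone of the algorithm can be illustrated with a toy nested-sampling loop that estimates the evidence of a one-dimensional Gaussian likelihood under a uniform prior; the crude rejection step stands in for the HMC-constrained sampling, and all settings below are illustrative.

```python
# Toy nested sampling: evidence of a Gaussian likelihood on a uniform prior.
import numpy as np

rng = np.random.default_rng(0)

def loglike(x):                       # N(0.3, 0.05) density on [0, 1]
    return -0.5 * ((x - 0.3) / 0.05) ** 2 - np.log(0.05 * np.sqrt(2 * np.pi))

n_live, n_iter, logZ = 200, 1400, -np.inf
live = rng.uniform(0, 1, n_live)      # live points drawn from the prior
logL = loglike(live)
for i in range(n_iter):
    worst = np.argmin(logL)
    # prior-volume shell between exp(-i/N) and exp(-(i+1)/N)
    logw = -(i + 1) / n_live + np.log(np.expm1(1.0 / n_live))
    logZ = np.logaddexp(logZ, logw + logL[worst])
    while True:                       # replace worst point (crude rejection;
        x = rng.uniform(0, 1)         # the paper uses HMC here instead)
        if loglike(x) > logL[worst]:
            live[worst], logL[worst] = x, loglike(x)
            break
# add the contribution of the remaining live points
logZ = np.logaddexp(logZ, -n_iter / n_live + np.log(np.exp(logL).mean()))
print("log-evidence ~", logZ)         # should be near 0 for this setup
```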

  4. Selecting Spatial Scale of Covariates in Regression Models of Environmental Exposures

    PubMed Central

    Grant, Lauren P.; Gennings, Chris; Wheeler, David C.

    2015-01-01

    Environmental factors or socioeconomic status variables used in regression models to explain environmental chemical exposures or health outcomes are often in practice modeled at the same buffer distance or spatial scale. In this paper, we present four model selection algorithms that select the best spatial scale for each buffer-based or area-level covariate. Contamination of drinking water by nitrate is a growing problem in agricultural areas of the United States, as ingested nitrate can lead to the endogenous formation of N-nitroso compounds, which are potent carcinogens. We applied our methods to model nitrate levels in private wells in Iowa. We found that environmental variables were selected at different spatial scales and that a model allowing spatial scale to vary across covariates provided the best goodness of fit. Our methods can be applied to investigate the association between environmental risk factors available at multiple spatial scales or buffer distances and measures of disease, including cancers. PMID:25983543

  5. Selecting spatial scale of covariates in regression models of environmental exposures.

    PubMed

    Grant, Lauren P; Gennings, Chris; Wheeler, David C

    2015-01-01

    Environmental factors or socioeconomic status variables used in regression models to explain environmental chemical exposures or health outcomes are often in practice modeled at the same buffer distance or spatial scale. In this paper, we present four model selection algorithms that select the best spatial scale for each buffer-based or area-level covariate. Contamination of drinking water by nitrate is a growing problem in agricultural areas of the United States, as ingested nitrate can lead to the endogenous formation of N-nitroso compounds, which are potent carcinogens. We applied our methods to model nitrate levels in private wells in Iowa. We found that environmental variables were selected at different spatial scales and that a model allowing spatial scale to vary across covariates provided the best goodness of fit. Our methods can be applied to investigate the association between environmental risk factors available at multiple spatial scales or buffer distances and measures of disease, including cancers. PMID:25983543
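
    One of the simplest versions of this idea is an AIC sweep over candidate buffer distances for a single covariate: fit the regression at each scale and keep the scale with the lowest AIC. The data, buffer distances, and statsmodels-based fit below are illustrative only, not the authors' algorithms.

```python
# Pick the buffer distance (spatial scale) for one covariate by lowest AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
buffers = {500: rng.normal(size=n),       # covariate measured at 500 m
           1000: rng.normal(size=n),      # ... at 1000 m
           2000: rng.normal(size=n)}      # ... at 2000 m
y = 0.8 * buffers[1000] + rng.normal(scale=0.5, size=n)   # "true" scale: 1000 m

aics = {dist: sm.OLS(y, sm.add_constant(x)).fit().aic
        for dist, x in buffers.items()}
best = min(aics, key=aics.get)
print("selected buffer distance:", best, "m;", "AICs:", aics)
```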

  6. Model selection in the weighted generalized estimating equations for longitudinal data with dropout.

    PubMed

    Gosho, Masahiko

    2016-05-01

    We propose criteria for variable selection in the mean model and for the selection of a working correlation structure in longitudinal data with dropout missingness using weighted generalized estimating equations. The proposed criteria are based on a weighted quasi-likelihood function and a penalty term. Our simulation results show that the proposed criteria frequently select the correct model in candidate mean models. The proposed criteria also have good performance in selecting the working correlation structure for binary and normal outcomes. We illustrate our approaches using two empirical examples. In the first example, we use data from a randomized double-blind study to test the cancer-preventing effects of beta carotene. In the second example, we use longitudinal CD4 count data from a randomized double-blind study. PMID:26509243

  7. Variable selection in Bayesian smoothing spline ANOVA models: Application to deterministic computer codes

    PubMed Central

    Reich, Brian J.; Storlie, Curtis B.; Bondell, Howard D.

    2009-01-01

    With many predictors, choosing an appropriate subset of the covariates is a crucial, and difficult, step in nonparametric regression. We propose a Bayesian nonparametric regression model for curve-fitting and variable selection. We use the smoothing spline ANOVA framework to decompose the regression function into interpretable main effect and interaction functions. Stochastic search variable selection via MCMC sampling is used to search for models that fit the data well. Also, we show that variable selection is highly sensitive to hyperparameter choice and develop a technique to select hyperparameters that control the long-run false positive rate. The method is used to build an emulator for a complex computer model for two-phase fluid flow. PMID:19789732

  8. A model of two-way selection system for human behavior.

    PubMed

    Zhou, Bin; Qin, Shujia; Han, Xiao-Pu; He, Zhe; Xie, Jia-Rong; Wang, Bing-Hong

    2014-01-01

    Two-way selection is a common phenomenon in nature and society. It appears in processes like choosing a mate between men and women, making contracts between job hunters and recruiters, and trading between buyers and sellers. In this paper, we propose a model of a two-way selection system and present its analytical solution for the expected total number of successful matches. The solution shows a regular pattern: the matching rate tends toward inverse proportionality to either the ratio between the two sides or the ratio of the state total to the size of the smaller group. The proposed model is verified by empirical data from matchmaking fairs. Results indicate that the model predicts this typical real-world two-way selection behavior within a bounded error, and it is thus helpful for understanding the dynamic mechanism of real-world two-way selection systems. PMID:24454687
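
    A toy Monte Carlo version of a two-way selection round (each side picks uniformly at random, and a match requires mutual choice) is sketched below; it is far simpler than the paper's model and is meant only to illustrate how a matching rate can be simulated.

```python
# Mutual-choice matching: a match forms only when both sides pick each other.
import numpy as np

def match_rate(m, n, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(trials):
        his_choice = rng.integers(n, size=m)   # each of m men picks one woman
        her_choice = rng.integers(m, size=n)   # each of n women picks one man
        matched = sum(her_choice[his_choice[i]] == i for i in range(m))
        counts.append(matched)
    return np.mean(counts) / min(m, n)         # rate relative to smaller group

print(match_rate(50, 100))
```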

  9. Recursive Random Forests Enable Better Predictive Performance and Model Interpretation than Variable Selection by LASSO.

    PubMed

    Zhu, Xiang-Wei; Xin, Yan-Jun; Ge, Hui-Lin

    2015-04-27

    Variable selection is of crucial significance in QSAR modeling since it increases the model's predictive ability and reduces noise. The selection of the right variables is far more complicated than the development of predictive models. In this study, eight continuous and categorical data sets were employed to explore the applicability of two distinct variable selection methods: random forests (RF) and the least absolute shrinkage and selection operator (LASSO). Variable selection was performed (1) by using recursive random forests to rule out a quarter of the least important descriptors at each iteration and (2) by using LASSO modeling with 10-fold inner cross-validation to tune its penalty λ for each data set. Along with regular statistical parameters of model performance, we proposed the highest pairwise correlation rate, the average pairwise Pearson's correlation coefficient, and the Tanimoto coefficient to evaluate in detail the variable subsets selected as optimal by RF and LASSO. Results showed that variable selection allows a tremendous reduction of noisy descriptors (at most 96% with the RF method in this study) and clearly enhances a model's predictive performance as well. Furthermore, random forests gather important predictors without restricting their pairwise correlation, which is contrary to LASSO: the mutual exclusion of highly correlated variables in LASSO modeling tends to skip important variables that are highly related to response endpoints and thus undermines the model's predictive performance. The optimal variables selected by RF share low similarity with those selected by LASSO (e.g., the Tanimoto coefficients were smaller than 0.20 in seven out of eight data sets). We found that the differences between RF and LASSO predictive performances mainly resulted from the variables selected by the different strategies rather than from the learning algorithms. Our study showed that the right selection of variables is more important than the learning algorithm for modeling. We hope
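
    The recursive-RF strategy is easy to mimic with scikit-learn: refit a random forest and drop the least important quarter of the descriptors until a small set survives. The synthetic data and stopping size below are arbitrary choices, not the paper's settings.

```python
# Recursive random-forest descriptor elimination (25% dropped per iteration).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=64, n_informative=8,
                       random_state=0)
keep = np.arange(X.shape[1])
while keep.size > 8:
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[:, keep], y)
    order = np.argsort(rf.feature_importances_)   # ascending importance
    keep = keep[order[keep.size // 4:]]           # drop least important 25%
print("surviving descriptors:", np.sort(keep))
```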

  10. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.

  11. Evaluation of two outlier-detection-based methods for detecting tissue-selective genes from microarray data.

    PubMed

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-01-01

    Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes for which expression is considerably higher and/or lower in some tissues than in others. Among numerous possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent's non-parametric method) can treat various types of selective patterns equally, but they produce substantially different results. We investigated the performance of these two methods for different parameter settings and for a reduced number of samples. We focused on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent's method is not suitable for ROKU. PMID:19936074
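
    The flavor of an AIC-based outlier scan can be conveyed with a toy one-gene version: compare a single-mean Gaussian model against models that give the k most extreme tissues their own means, and keep whichever minimizes AIC. This is a deliberately simplified sketch, not the published method or ROKU.

```python
# Toy AIC-based detection of tissue-selective (outlier) expression values.
import numpy as np

def aic_gauss(residuals, n_params):
    n = residuals.size
    return n * np.log(np.sum(residuals ** 2) / n) + 2 * n_params

def detect_outliers(x, max_k=3):
    n = x.size
    order = np.argsort(np.abs(x - np.median(x)))[::-1]   # most extreme first
    best_k, best_aic = 0, aic_gauss(x - x.mean(), 1)
    for k in range(1, max_k + 1):
        resid = np.zeros(n)                   # each outlier gets its own mean
        inliers = np.delete(np.arange(n), order[:k])
        resid[inliers] = x[inliers] - x[inliers].mean()
        aic = aic_gauss(resid, 1 + k)         # one shared mean + k outlier means
        if aic < best_aic:
            best_k, best_aic = k, aic
    return np.sort(order[:best_k])

expr = np.array([2.1, 1.9, 2.0, 2.2, 8.5, 2.0, 1.8])  # one tissue-selective spike
print("outlier tissues:", detect_outliers(expr))
```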

  12. DEVELOPMENT OF AN AGGREGATION AND EPISODE SELECTION SCHEME TO SUPPORT THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY MODEL

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation of use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700-hPa east-west and north-south...

  13. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention.
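
    A rate-based software sketch of the WTA primitive described above: self-excitation plus shared global inhibition amplifies the strongest input and suppresses the rest. All parameters are illustrative and unrelated to any particular chip.

```python
# Minimal winner-take-all dynamics: rectified rate units with self-excitation
# (w_exc) and global inhibitory feedback (w_inh). Values chosen for stability.
import numpy as np

def wta(inputs, steps=600, dt=0.05, w_exc=1.2, w_inh=1.5):
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        inhibition = w_inh * x.sum()             # shared inhibitory feedback
        drive = inputs + w_exc * x - inhibition  # net input to each unit
        x += dt * (-x + np.maximum(drive, 0.0))  # rectified rate dynamics
    return x

rates = wta(np.array([0.9, 1.0, 0.8, 0.5]))
print("steady-state rates:", np.round(rates, 3), "winner:", rates.argmax())
```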

  14. A hidden Markov model for investigating recent positive selection through haplotype structure.

    PubMed

    Chen, Hua; Hey, Jody; Slatkin, Montgomery

    2015-02-01

    Recent positive selection can increase the frequency of an advantageous mutant rapidly enough that a relatively long ancestral haplotype remains intact around it. We present a hidden Markov model (HMM) to identify such haplotype structures. With the HMM-identified haplotype structures, a population genetic model for the extent of ancestral haplotypes is then adopted for parameter inference of the selection intensity and the allele age. Simulations show that this method can detect selection under a wide range of conditions and has higher power than the existing frequency spectrum-based method. In addition, it provides good estimates of the selection coefficients and allele ages for strong selection. The method analyzes large data sets in a reasonable amount of running time. This method is applied to HapMap III data for a genome scan, and identifies a list of candidate regions putatively under recent positive selection. It is also applied to several genes known to be under recent positive selection, including the LCT, KITLG and TYRP1 genes in Northern Europeans, and OCA2 in East Asians, to estimate their allele ages and selection coefficients. PMID:25446961

  16. Selecting and weighting spatial predictors for empirical modeling of landslide susceptibility in the Darjeeling Himalayas (India)

    NASA Astrophysics Data System (ADS)

    Ghosh, Saibal; Carranza, Emmanuel John M.; van Westen, Cees J.; Jetten, Victor G.; Bhattacharya, Dipendra N.

    2011-08-01

    In this paper, we created predictive models for assessing the susceptibility to shallow translational rocksliding and debris sliding in the Darjeeling Himalayas (India) by empirically selecting and weighting spatial predictors of landslides. We demonstrate a two-stage methodology: (1) quantifying associations of individual spatial factors with landslides of different types using bivariate analysis to select predictors; and (2) pairwise comparisons of the quantified associations using an analytical hierarchy process to assign predictor weights. We integrate the weighted spatial predictors through multi-class index overlay to derive predictive models of landslide susceptibility. The resultant model for shallow translational landsliding based on selected and weighted predictors outperforms those based on all weighted predictors or selected and unweighted predictors. Therefore, spatial factors with negative associations with landslides and unweighted predictors are ineffective in predictive modeling of landslide susceptibility. We also applied logistic regression to model landslide susceptibility, but some of the selected predictors are less realistic than those from our methodology, and our methodology gives better prediction rates. Although previous predictive models of landslide susceptibility indicate that multivariate analyses are superior to bivariate analyses, we demonstrate the benefit of the proposed methodology, which includes bivariate analyses.
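
    The analytical hierarchy process (AHP) weighting step mentioned above reduces to taking the principal eigenvector of a pairwise comparison matrix; the 3x3 matrix below is an invented example, not the paper's predictor set.

```python
# AHP weights from a Saaty-style reciprocal matrix: entry [i, j] says how much
# more strongly predictor i is associated with landslides than predictor j.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()         # normalized predictor weights

# Consistency check: CI / RI, with RI = 0.58 for 3x3 random matrices.
ci = (np.max(np.real(eigvals)) - len(A)) / (len(A) - 1)
print("weights:", np.round(weights, 3), "consistency ratio:", round(ci / 0.58, 3))
```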

  17. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops

    PubMed Central

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce only a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires selection based on single-plant evaluation, which is inaccurate. Genomic selection (GS), which can predict the genetic potential of individuals based on their marker genotype, might offer high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an “island model” inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of
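
    A toy sketch of the island idea: subpopulations are selected independently and exchange occasional migrants, which slows the loss of allelic variation relative to a single panmictic population. The marker effects, population sizes, and selection rule below are invented stand-ins, not the study's simulation design.

```python
# Island-style selection on a genomic score with ring migration every 5
# generations. A real GS scheme would select on marker-estimated breeding
# values; here the marker effects are assumed known for simplicity.
import numpy as np

rng = np.random.default_rng(5)
n_islands, pop, n_loci, gens = 4, 50, 100, 30
effects = rng.normal(size=n_loci)                 # assumed known marker effects
islands = [rng.integers(0, 2, size=(pop, n_loci)) for _ in range(n_islands)]
base = np.mean(np.vstack(islands) @ effects)      # starting mean genetic value

for g in range(gens):
    for k, G in enumerate(islands):
        top = G[np.argsort(G @ effects)[-10:]]    # within-island truncation selection
        dads, mums = rng.integers(10, size=pop), rng.integers(10, size=pop)
        mask = rng.random((pop, n_loci)) < 0.5    # uniform "crossover"
        islands[k] = np.where(mask, top[dads], top[mums])
    if g % 5 == 0:                                # occasional ring migration
        for k in range(n_islands):
            islands[k][0] = islands[(k + 1) % n_islands][1].copy()

pooled = np.vstack(islands)
p = pooled.mean(axis=0)                           # pooled allele frequencies
print(f"genetic gain: {np.mean(pooled @ effects) - base:.1f}, "
      f"retained diversity 2p(1-p): {np.mean(2 * p * (1 - p)):.3f}")
```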

  18. The Performance of IRT Model Selection Methods with Mixed-Format Tests

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2012-01-01

    When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…

  19. Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model

    ERIC Educational Resources Information Center

    Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum

    2009-01-01

    Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…

  20. Coursework Selection: A Frame of Reference Approach Using Structural Equation Modelling

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reuter, Martin; Hilling, Christine

    2005-01-01

    Background: Choice behaviour has far-reaching consequences for students' educational careers. Previous models of course selection, for example the model of achievement-related choices (Wigfield & Eccles, 2000) and self-efficacy theory (Bandura, 1997), stress the importance of ability perceptions (self-concept of ability) as major determinants of…

  1. Selecting ELL Textbooks: A Content Analysis of Language-Teaching Models

    ERIC Educational Resources Information Center

    LaBelle, Jeffrey T.

    2011-01-01

    Many middle school teachers lack adequate criteria to critically select materials that represent a variety of L2 teaching models. This study analyzes the illustrated and written content of 33 ELL textbooks to determine the range of L2 teaching models represented. The researchers asked to what extent middle school ELL texts depict frequency and…

  2. Model term selection for spatio-temporal system identification using mutual information

    NASA Astrophysics Data System (ADS)

    Wang, Shu; Wei, Hua-Liang; Coca, Daniel; Billings, Stephen A.

    2013-02-01

    A new mutual-information-based algorithm is introduced for term selection in spatio-temporal models. A generalised cross-validation procedure is also introduced for model length determination, and examples based on cellular automata, coupled map lattice and partial differential equations are described.
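
    A minimal sketch of mutual-information-based term ranking, with a toy nonlinear input-output system standing in for the spatio-temporal models above; the candidate terms and the MI estimator are assumptions.

```python
# Score each candidate model term by its mutual information with the output
# and keep the top-ranked terms. np.roll is used as a simple lag proxy here
# (the wrap-around at the boundary is negligible for this toy example).
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = 0.8 * u**2 + 0.3 * np.roll(u, 1) + 0.1 * rng.normal(size=500)

candidates = np.column_stack([u, u**2, np.roll(u, 1), np.roll(u, 2)])
names = ["u(t)", "u(t)^2", "u(t-1)", "u(t-2)"]

mi = mutual_info_regression(candidates, y, random_state=0)
for name, score in sorted(zip(names, mi), key=lambda p: -p[1]):
    print(f"{name:8s} MI = {score:.3f}")
```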

  3. AN AGGREGATION AND EPISODE SELECTION SCHEME FOR EPA'S MODELS-3 CMAQ

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation for use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700 hPa u and v wind field compo...

  4. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  5. Making good choices with variable information: a stochastic model for nest-site selection by honeybees.

    PubMed

    Perdriau, Benjamin S; Myerscough, Mary R

    2007-04-22

    A density-dependent Markov process model is constructed for information transfer among scouts during nest-site selection by honeybees (Apis mellifera). The effects of site quality, competition between sites and delays in site discovery are investigated. The model predicts that bees choose the better of two sites more reliably when both sites are of low quality than when both sites are of high quality and that delay in finding a second site has the most effect on the final choice when both sites are of high quality. The model suggests that stochastic effects in honeybee nest-site selection confer no advantage on the swarm. PMID:17301012

  6. Mode-selective quantization and multimodal effective models for spherically layered systems

    NASA Astrophysics Data System (ADS)

    Dzsotjan, D.; Rousseaux, B.; Jauslin, H. R.; des Francs, G. Colas; Couteau, C.; Guérin, S.

    2016-08-01

    We propose a geometry-specific, mode-selective quantization scheme in coupled field-emitter systems which makes it easy to include material and geometrical properties, and intrinsic losses, as well as the positions of an arbitrary number of quantum emitters. The method is presented through the example of a spherically symmetric, nonmagnetic, arbitrarily layered system. We follow up with a framework to project the system onto simpler, effective cavity QED models. Maintaining a well-defined connection to the original quantization, we derive the emerging effective quantities from the full, mode-selective model in a mathematically consistent way. We discuss the uses and limitations of these effective models.

  7. Model selection forecasts for the spectral index from the Planck satellite

    SciTech Connect

    Pahud, Cedric; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David

    2006-06-15

    The recent WMAP3 results have placed measurements of the spectral index n_S in an interesting position. While parameter estimation techniques indicate that the Harrison-Zel'dovich spectrum n_S = 1 is strongly excluded (in the absence of tensor perturbations), Bayesian model selection techniques reveal that the case against n_S = 1 is not yet conclusive. In this paper, we forecast the ability of the Planck satellite mission to use Bayesian model selection to convincingly exclude (or favor) the Harrison-Zel'dovich model.
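
    For context, Bayesian model selection of the kind referred to here compares models through the Bayes factor, the ratio of model evidences; this is the standard textbook statement, not a formula taken from the record above.

```latex
% Bayes factor comparing M_0 (Harrison-Zel'dovich, n_S = 1) with M_1 (free n_S):
% each evidence integrates the likelihood over that model's parameter prior.
B_{01} = \frac{P(D \mid M_0)}{P(D \mid M_1)}
       = \frac{\int \mathcal{L}(D \mid \theta_0, M_0)\, \pi(\theta_0 \mid M_0)\, d\theta_0}
              {\int \mathcal{L}(D \mid \theta_1, M_1)\, \pi(\theta_1 \mid M_1)\, d\theta_1}
```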

  8. General kin selection models for genetic evolution of sib altruism in diploid and haplodiploid species.

    PubMed Central

    Levitt, P R

    1975-01-01

    A population genetic approach is presented for general analysis and comparison of kin selection models of sib and half-sib altruism. Nine models are described, each assuming a particular mode of inheritance, number of female inseminations, and Mendelian dominance of the altruist gene. In each model, the selective effects of altruism are described in terms of two general fitness functions, A(beta) and S(beta), giving respectively the expected fitness of an altruist and a nonaltruist as a function of the fraction of altruists beta in a given sibship. For each model, exact conditions are reported for stability at altruist and nonaltruist fixation. Under the Table 3 axioms, the stability conditions may then be partially ordered on the basis of implications holding between pairs of conditions. The partial orderings are compared with predictions of the kin selection theory of Hamilton. PMID:1060136
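
    The benchmark for these comparisons, Hamilton's rule, can be stated compactly; this is the standard form, added for context, with symbols not defined in the record above.

```latex
% Hamilton's rule: altruism is favored when relatedness times benefit
% exceeds the cost to the altruist.
r\,b > c
% r: relatedness between actor and recipient (e.g., 1/2 for diploid full
% sibs, 3/4 for haplodiploid full sisters); b: benefit; c: cost.
```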

  10. Real-world datasets for portfolio selection and solutions of some stochastic dominance portfolio models.

    PubMed

    Bruni, Renato; Cesarone, Francesco; Scozzari, Andrea; Tardella, Fabio

    2016-09-01

    A large number of portfolio selection models have appeared in the literature since the pioneering work of Markowitz. However, even when computational and empirical results are described, they are often hard to replicate and compare due to the unavailability of the datasets used in the experiments. We provide here several datasets for portfolio selection generated using real-world price values from several major stock markets. The datasets contain weekly return values, adjusted for dividends and for stock splits, which are cleaned from errors as much as possible. The datasets are available in different formats, and can be used as benchmarks for testing the performance of portfolio selection models and for comparing the efficiency of the algorithms used to solve them. We also provide, for these datasets, the portfolios obtained by several selection strategies based on Stochastic Dominance models (see "On Exact and Approximate Stochastic Dominance Strategies for Portfolio Selection" (Bruni et al. [2])). We believe that testing portfolio models on publicly available datasets greatly simplifies the comparison of the different portfolio selection strategies. PMID:27508232

  11. Variable Selection for Propensity Score Models When Estimating Treatment Effects on Multiple Outcomes: a Simulation Study

    PubMed Central

    Wyss, Richard; Girman, Cynthia J.; LoCasale, Robert J.; Brookhart, M. Alan; Stürmer, Til

    2012-01-01

    Purpose It is often preferable to simplify the estimation of treatment effects on multiple outcomes by using a single propensity score (PS) model. Variable selection in PS models impacts the efficiency and validity of treatment effects. However, the impact of different variable selection strategies on the estimated treatment effects in settings involving multiple outcomes is not well understood. The authors use simulations to evaluate the impact of different variable selection strategies on the bias and precision of effect estimates to provide insight into the performance of various PS models in settings with multiple outcomes. Methods Simulated studies consisted of dichotomous treatment, two Poisson outcomes, and eight standard-normal covariates. Covariates were selected for the PS models based on their effects on treatment, a specific outcome, or both outcomes. The PSs were implemented using stratification, matching, and inverse-probability-of-treatment weighting (IPTW). Results PS models including only covariates affecting a specific outcome (outcome-specific models) resulted in the most efficient effect estimates. The PS model that only included covariates affecting either outcome (generic-outcome model) performed best among the models that simultaneously controlled measured confounding for both outcomes. Similar patterns were observed over the range of parameter values assessed and all PS implementation methods. Conclusions A single, generic-outcome model performed well compared with separate outcome-specific models in most scenarios considered. The results emphasize the benefit of using prior knowledge to identify covariates that affect the outcome when constructing PS models and support the potential to use a single, generic-outcome PS model when multiple outcomes are being examined. PMID:23070806
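
    A minimal sketch of one PS implementation above (IPTW) on simulated data loosely matching the described setup; the covariate effects, sample size, and crude weighted estimator are assumptions, not the authors' simulation protocol.

```python
# Fit a single PS model, weight by inverse probability of treatment, and form
# a crude weighted rate ratio for one Poisson outcome (true ratio exp(0.3)).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 8))                       # eight standard-normal covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.5 * X[:, 1])))
z = rng.binomial(1, p_treat)                      # dichotomous treatment
y = rng.poisson(np.exp(0.3 * z + 0.4 * X[:, 1]))  # one Poisson outcome

ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))        # IPTW weights

rate1 = np.sum(w * z * y) / np.sum(w * z)
rate0 = np.sum(w * (1 - z) * y) / np.sum(w * (1 - z))
print(f"IPTW rate ratio ~ {rate1 / rate0:.2f} (true exp(0.3) ~ 1.35)")
```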

  12. SnIPRE: selection inference using a Poisson random effects model.

    PubMed

    Eilertson, Kirsten E; Booth, James G; Bustamante, Carlos D

    2012-01-01

    We present an approach for identifying genes under natural selection using polymorphism and divergence data from synonymous and non-synonymous sites within genes. A generalized linear mixed model is used to model the genome-wide variability among categories of mutations and estimate its functional consequence. We demonstrate how the model's estimated fixed and random effects can be used to identify genes under selection. The parameter estimates from our generalized linear model can be transformed to yield population genetic parameter estimates for quantities including the average selection coefficient for new mutations at a locus, the synonymous and non-synonymous mutation rates, and species divergence times. Furthermore, our approach incorporates stochastic variation due to the evolutionary process and can be fit using standard statistical software. The model is fit in both the empirical Bayes and Bayesian settings using the lme4 package in R, and Markov chain Monte Carlo methods in WinBUGS. Using simulated data we compare our method to existing approaches for detecting genes under selection: the McDonald-Kreitman test, and two versions of the Poisson random field based method MKprf. Overall, we find our method universally outperforms existing methods for detecting genes subject to selection using polymorphism and divergence data. PMID:23236270
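
    The McDonald-Kreitman test used as a baseline above reduces to a 2x2 table of polymorphism versus divergence counts at synonymous and non-synonymous sites; the counts below are invented for illustration.

```python
# McDonald-Kreitman 2x2 table with a Fisher exact test and the neutrality
# index NI = (Pn/Ps) / (Dn/Ds); NI < 1 suggests an excess of non-synonymous
# divergence, consistent with positive selection.
from scipy.stats import fisher_exact

#              polymorphic  divergent
counts = [[43, 17],   # synonymous
          [12, 25]]   # non-synonymous

odds, p = fisher_exact(counts)
ni = (counts[1][0] / counts[0][0]) / (counts[1][1] / counts[0][1])
print(f"Fisher exact p = {p:.4f}, neutrality index = {ni:.2f}")
```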

  14. Model selection and change detection for a time-varying mean in process monitoring

    NASA Astrophysics Data System (ADS)

    Burr, Tom; Hamada, Michael S.; Ticknor, Larry; Weaver, Brian

    2014-07-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation is an old topic; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of alarm threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data - prediction. This paper briefly reviews alarm threshold estimation, introduces model selection options, and considers several assumptions regarding the data-generating mechanism for PM residuals. Four PM examples from nuclear safeguards are included. One example involves frequent by-batch material balance closures where a dissolution vessel has time-varying efficiency, leading to time-varying material holdup. Another example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals. Our main focus is model selection to select a defensible model for normal behavior with a time-varying mean in a PM residual stream. We use approximate Bayesian computation to perform the model selection and parameter estimation for normal behavior. We then describe a simple lag-one-differencing option similar to that used to monitor non-stationary time series to monitor for off-normal behavior.
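
    A minimal sketch of the lag-one-differencing option mentioned above: differencing removes a slowly varying mean in the residual stream, and an alarm threshold targeting a small false-alarm rate is taken from training-data quantiles. The drift, noise level, and quantile are assumptions.

```python
# Lag-one differencing of a PM residual stream with a quantile-based alarm
# threshold estimated from a training window.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(1000)
residuals = 0.002 * t + rng.normal(0, 1.0, size=1000)  # drifting mean + noise

d = np.diff(residuals)                         # lag-one differences, mean ~ 0
train, test = d[:500], d[500:]
threshold = np.quantile(np.abs(train), 0.999)  # ~0.1% false-alarm rate

alarms = np.flatnonzero(np.abs(test) > threshold)
print(f"threshold = {threshold:.2f}, alarms in test window: {len(alarms)}")
```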

  15. Quasi-hidden Markov model and its applications in cluster analysis of earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Wu, Zhengxiao

    2011-12-01

    We identify a broad class of models, quasi-hidden Markov models (QHMMs), which include hidden Markov models (HMMs) as special cases. Applying the QHMM framework, this paper studies how an earthquake cluster propagates statistically. Two QHMMs are used to describe two different propagating patterns. The "mother-and-kids" model regards the first shock in an earthquake cluster as the "mother" and the aftershocks as "kids," which occur in a neighborhood centered on the mother. In the "domino" model, however, the next aftershock strikes in a neighborhood centered on the most recent previous earthquake in the cluster, so aftershocks act like dominoes. As the likelihood of QHMMs can be efficiently computed via the forward algorithm, likelihood-based model selection criteria can be calculated to compare these two models. We demonstrate this procedure using data from the central New Zealand region. For this data set, the mother-and-kids model yields a higher likelihood as well as smaller AIC and BIC. In other words, in the aforementioned area the next aftershock is more likely to occur near the first shock than near the latest aftershock in the cluster. This provides an answer, though not an entirely satisfactory one, to the question "where will the next aftershock be?". The asymptotic consistency of the model selection procedure in the paper is duly established, namely that, when the number of observations goes to infinity, with probability one the procedure picks out the model with the smaller deviation from the true model (in terms of relative entropy rate).
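
    The comparison step described above reduces to computing AIC and BIC from each model's forward-algorithm log-likelihood; the log-likelihoods and parameter counts below are placeholders, not the paper's fitted values.

```python
# AIC and BIC from a fitted model's log-likelihood, parameter count k, and
# number of observations n; the smaller value wins under each criterion.
import numpy as np

def aic_bic(loglik, k, n):
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

n_events = 1500  # placeholder catalog size
for name, loglik, k in [("mother-and-kids", -4210.3, 6),   # placeholder fits
                        ("domino",          -4237.8, 6)]:
    aic, bic = aic_bic(loglik, k, n_events)
    print(f"{name:16s} AIC = {aic:.1f}  BIC = {bic:.1f}")
```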

  16. A signal integration model of thymic selection and natural regulatory T cell commitment.

    PubMed

    Khailaie, Sahamoddin; Robert, Philippe A; Toker, Aras; Huehn, Jochen; Meyer-Hermann, Michael

    2014-12-15

    The extent of TCR self-reactivity is the basis for selection of a functional and self-tolerant T cell repertoire and is quantified by repeated engagement of TCRs with a diverse pool of self-peptides complexed with self-MHC molecules. The strength of a TCR signal depends on the binding properties of a TCR to the peptide and the MHC, but it is not clear how the specificity to both components drives fate decisions. In this study, we propose a TCR signal-integration model of thymic selection that describes how thymocytes decide among distinct fates, not only based on a single TCR-ligand interaction, but taking into account the TCR stimulation history. These fates are separated based on sustained accumulated signals for positive selection and transient peak signals for negative selection. This maps the cells into a two-dimensional space where they are either neglected, positively selected, negatively selected, or selected as natural regulatory T cells (nTregs). We show that the dynamics of the integrated signal can serve as a successful basis for extracting the specificity of thymocytes to MHC and detecting the existence of cognate self-peptide-MHC. It allows the selection of a self-MHC-biased and self-peptide-tolerant T cell repertoire. Furthermore, nTregs in the model are enriched with MHC-specific TCRs. This allows nTregs to be more sensitive to activation and more cross-reactive than conventional T cells. This study provides a mechanistic model showing that time integration of TCR-mediated signals, as opposed to single-cell interaction events, is needed to gain a full view on the properties emerging from thymic selection. PMID:25392533

  17. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. However, this rigorous procedure does not yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may suffer a lack of robustness related to the uncertainty in noisy data, which may compromise the reliability of model ranking. We present a new statistical concept to account for measurement noise as a source of uncertainty for the weights in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term into the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit on the performance of the model set that is imposed by measurement noise. As an extension to the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
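
    A compact sketch of the brute-force Monte Carlo check described above: perturb the calibration data with random measurement error, recompute posterior model weights, and report their variance. Gaussian likelihoods, equal priors, and the two toy models are assumptions, not the authors' setup.

```python
# Repeatedly perturb observed data with measurement noise and track the
# induced variability ("weighting variance") in posterior model weights.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 20)
data = 2.0 * x + rng.normal(0, 0.2, size=20)       # synthetic observations
preds = {"linear": 2.0 * x, "quadratic": 2.0 * x**2}
sigma = 0.2                                        # assumed noise level

def posterior_weights(y):
    logL = {m: -0.5 * np.sum((y - p) ** 2) / sigma**2 for m, p in preds.items()}
    shift = max(logL.values())                     # for numerical stability
    w = {m: np.exp(v - shift) for m, v in logL.items()}
    total = sum(w.values())
    return {m: v / total for m, v in w.items()}

weights = np.array([posterior_weights(data + rng.normal(0, sigma, size=20))["linear"]
                    for _ in range(500)])
print(f"linear-model weight: mean {weights.mean():.3f}, "
      f"weighting variance {weights.var():.4f}")
```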

  18. Selection and mutation in X-linked recessive diseases epidemiological model.

    PubMed

    Verrilli, Francesca; Kebriaei, Hamed; Glielmo, Luigi; Corless, Martin; Del Vecchio, Carmen

    2015-08-01

    To describe the epidemiology of X-linked recessive diseases, we developed a discrete-time, structured, nonlinear mathematical model. The model allows for de novo mutations (i.e., affected siblings born to unaffected parents) and selection (i.e., distinct fitness rates depending on an individual's health condition). Applying the Lyapunov direct method, we found the domain of attraction of the model's equilibrium point and studied the convergence properties of the degenerate equilibrium where only affected individuals survive. PMID:26737169

  19. Prattville intake, Lake Almanor, California, hydraulic model study on selective withdrawal modifications. Final report

    SciTech Connect

    Vermeyen, T.

    1995-07-01

    Bureau of Reclamation conducted this hydraulic model study to provide Pacific Gas and Electric Company with an evaluation of several selective withdrawal structures that are being considered to reduce intake flow temperatures through the Prattville Intake at Lake Almanor, California. Release temperature control using selective withdrawal structures is being considered in an effort to improve the cold-water fishery in the North Fork of the Feather River.

  20. Using an immune system model to explore mate selection in genetic algorithms.

    SciTech Connect

    Huang, C. F.

    2003-01-01

    In the setting of multimodal function optimization, engineering, and machine learning, identifying multiple peaks and maintaining subpopulations of the search space are two central themes when genetic algorithms (GAs) are employed. In this paper, an immune system model is adopted to develop a framework for exploring the role of mate selection in GAs with respect to these two issues. The experimental results reported in the paper will shed more light on how mate selection schemes compare with traditional selection schemes. In particular, we show that dissimilar mating is beneficial in identifying multiple peaks, yet harmful in maintaining subpopulations of the search space.

  1. Understanding the link between sexual selection, sexual conflict and aging using crickets as a model.

    PubMed

    Archer, C Ruth; Hunt, John

    2015-11-01

    Aging evolved because the strength of natural selection declines over the lifetime of most organisms. Weak natural selection late in life allows the accumulation of deleterious mutations and may favor alleles that have positive effects on fitness early in life, but costly pleiotropic effects expressed later on. While this decline in natural selection is central to longstanding evolutionary explanations for aging, a role for sexual selection and sexual conflict in the evolution of lifespan and aging has only been identified recently. Testing how sexual selection and sexual conflict affect lifespan and aging is challenging as it requires quantifying male age-dependent reproductive success. This is difficult in the invertebrate model organisms traditionally used in aging research. Research using crickets (Orthoptera: Gryllidae), where reproductive investment can be easily measured in both sexes, has offered exciting and novel insights into how sexual selection and sexual conflict affect the evolution of aging, both in the laboratory and in the wild. Here we discuss how sexual selection and sexual conflict can be integrated alongside evolutionary and mechanistic theories of aging using crickets as a model. We then highlight the potential for research using crickets to further advance our understanding of lifespan and aging. PMID:26150061

  2. MOMENT-BASED METHOD FOR RANDOM EFFECTS SELECTION IN LINEAR MIXED MODELS

    PubMed Central

    Ahn, Mihye; Lu, Wenbin

    2012-01-01

    The selection of random effects in linear mixed models is an important yet challenging problem in practice. We propose a robust and unified framework for automatically selecting random effects and estimating covariance components in linear mixed models. A moment-based loss function is first constructed for estimating the covariance matrix of random effects. Two types of shrinkage penalties, a hard thresholding operator and a new sandwich-type soft-thresholding penalty, are then imposed for sparse estimation and random effects selection. Compared with existing approaches, the new procedure does not require any distributional assumption on the random effects and error terms. We establish the asymptotic properties of the resulting estimator in terms of its consistency in both random effects selection and variance component estimation. Optimization strategies are suggested to tackle the computational challenges involved in estimating the sparse variance-covariance matrix. Furthermore, we extend the procedure to incorporate the selection of fixed effects as well. Numerical results show promising performance of the new approach in selecting both random and fixed effects and, consequently, improving the efficiency of estimating model parameters. Finally, we apply the approach to a data set from the Amsterdam Growth and Health study. PMID:23105913

  3. Probing cosmology with weak lensing selected clusters. II. Dark energy and f(R) gravity models

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Hamana, Takashi; Yoshida, Naoki

    2016-02-01

    Ongoing and future wide-field galaxy surveys can be used to locate a number of clusters of galaxies with cosmic shear measurement alone. We study constraints on cosmological models using statistics of weak lensing selected galaxy clusters. We extend our previous theoretical framework to model the statistical properties of clusters in variants of cosmological models as well as in the standard ΛCDM model. Weak lensing selection of clusters does not rely on conventional assumptions such as the relation between luminosity and mass and/or hydrostatic equilibrium, but a number of observational effects compromise robust identification. We use a large set of realistic mock weak lensing catalogs as well as analytic models to perform a Fisher analysis and make a forecast for constraining two competing cosmological models, the wCDM model and the f(R) model proposed by Hu and Sawicki (2007, Phys. Rev. D, 76, 064004), with our lensing statistics. We show that weak lensing selected clusters are excellent probes of cosmology when combined with the cosmic shear power spectrum even in the presence of galaxy shape noise and masked regions. With the information from weak lensing selected clusters, the precision of cosmological parameter estimates can be improved by a factor of ~1.6 and ~8 for the wCDM model and f(R) model, respectively. The Hyper Suprime-Cam survey with sky coverage of 1250 square degrees can constrain the equation of state of dark energy w0 with a level of Δw0 ~ 0.1. It can also constrain the additional scalar degree of freedom in the f(R) model with a level of |fR0| ~ 5 × 10^-6, when constraints from cosmic microwave background measurements are incorporated. Future weak lensing surveys with sky coverage of 20000 square degrees will place tighter constraints on w0 and |fR0| even without cosmic microwave background measurements.

  4. Use of Thermodynamic Modeling for Selection of Electrolyte for Electrorefining of Magnesium from Aluminum Alloy Melts

    NASA Astrophysics Data System (ADS)

    Gesing, Adam J.; Das, Subodh K.

    2016-06-01

    With United States Department of Energy Advanced Research Projects Agency funding, experimental proof-of-concept was demonstrated for the RE-12™ electrorefining process for extracting a desired amount of Mg from recycled scrap secondary molten Al alloys. The key enabling technology for this process was the selection of a suitable electrolyte composition and operating temperature. The selection was made using the FactSage thermodynamic modeling software and the light metal, molten salt, and oxide thermodynamic databases. Modeling allowed prediction of the chemical equilibria and of the impurity contents in both the anode and cathode products and in the electrolyte. FactSage also provided data on the physical properties of the electrolyte and the molten metal phases, including the electrical conductivity and density of the molten phases. Further modeling permitted selection of electrode and cell construction materials chemically compatible with the combination of molten metals and the electrolyte.

  5. First Principles Molecular Modeling of Sensing Material Selection for Hybrid Biomimetic Nanosensors

    NASA Astrophysics Data System (ADS)

    Blanco, Mario; McAlpine, Michael C.; Heath, James R.

    Hybrid biomimetic nanosensors use selective polymeric and biological materials that integrate flexible recognition moieties with nanometer size transducers. These sensors have the potential to offer the building blocks for a universal sensing platform. Their vast range of chemistries and high conformational flexibility present both a problem and an opportunity. Nonetheless, it has been shown that oligopeptide aptamers from sequenced genes can be robust substrates for the selective recognition of specific chemical species. Here we present first principles molecular modeling approaches tailored to peptide sequences suitable for the selective discrimination of small molecules on nanowire arrays. The modeling strategy is fully atomistic. The excellent performance of these sensors, their potential biocompatibility combined with advanced mechanistic modeling studies, could potentially lead to applications such as: unobtrusive implantable medical sensors for disease diagnostics, light weight multi-purpose sensing devices for aerospace applications, ubiquitous environmental monitoring devices in urban and rural areas, and inexpensive smart packaging materials for active in-situ food safety labeling.

  6. The Pattern of Neutral Molecular Variation under the Background Selection Model

    PubMed Central

    Charlesworth, D.; Charlesworth, B.; Morgan, M. T.

    1995-01-01

    Stochastic simulations of the infinite sites model were used to study the behavior of genetic diversity at a neutral locus in a genomic region without recombination, but subject to selection against deleterious alleles maintained by recurrent mutation (background selection). In large populations, the effect of background selection on the number of segregating sites approaches the effect on nucleotide site diversity, i.e., the reduction in genetic variability caused by background selection resembles that caused by a simple reduction in effective population size. We examined, by coalescence-based methods, the power of several tests for the departure from neutral expectation of the frequency spectra of alleles in samples from randomly mating populations (Tajima's, Fu and Li's, and Watterson's tests). All of the tests have low power unless the selection against mutant alleles is extremely weak. In Drosophila, significant Tajima's tests are usually not obtained with empirical data sets from loci in genomic regions that have restricted recombination frequencies and exhibit low genetic diversity. This is consistent with the operation of background selection as opposed to selective sweeps. It remains to be decided whether background selection is sufficient to explain the observed extent of reduction in diversity in regions of restricted recombination. PMID:8601499
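
    Tajima's test, one of the frequency-spectrum tests whose power is assessed above, is computed from the sample size, the number of segregating sites, and the mean pairwise diversity; the input values below are invented for illustration.

```python
# Tajima's D contrasts pairwise diversity (pi) with the scaled number of
# segregating sites (S / a1) using the standard normalizing constants.
import numpy as np

def tajimas_d(n, S, pi):
    """n sampled sequences, S segregating sites, pi mean pairwise differences."""
    i = np.arange(1, n)
    a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

# Negative D (an excess of rare variants) is expected under background
# selection or a selective sweep; D near zero is expected under neutrality.
print(f"D = {tajimas_d(n=30, S=45, pi=8.2):.2f}")
```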

  7. Optimal Selection of Predictor Variables in Statistical Downscaling Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Goly, A.; Teegavarapu, R. S. V.

    2014-12-01

    Statistical downscaling models developed for precipitation rely heavily on the predictors chosen and on accurate relationships between the regional-scale predictand and GCM-scale predictors for providing future precipitation projections at different spatial and temporal scales. This study provides two new screening methods for selecting predictor variables for use in downscaling methods based on predictand-predictor relationships. Methods to characterize predictand-predictor relationships via rigid and flexible functional forms, using a mixed-integer nonlinear programming (MINLP) model with binary variables and artificial neural network (ANN) models respectively, are developed and evaluated in this study. In addition to these two methods, a stepwise regression (SWR) and two models that do not use any pre-screening of variables are also evaluated. A two-step process is used to downscale precipitation data: optimally selecting predictors and then using them in a statistical downscaling model based on a support vector machine (SVM) approach. Experiments with the two proposed methods and three additional methods (one based on stepwise regression, one based on correlation between predictors and predictand, and one based on principal component analysis) are evaluated in this study. Results suggest that optimal selection of variables using MINLP, albeit with a linear relationship, and the ANN method provided improved performance and error measures compared with the two models that did not use these methods for screening the variables. Of the three screening methods tested in this study, the SWR method selected the fewest variables and also ranked lowest on several performance measures.

  8. The quantitative genetics of indirect genetic effects: a selective review of modelling issues.

    PubMed

    Bijma, P

    2014-01-01

    Indirect genetic effects (IGE) occur when the genotype of an individual affects the phenotypic trait value of another conspecific individual. IGEs can have profound effects on both the magnitude and the direction of response to selection. Models of inheritance and response to selection in traits subject to IGEs have been developed within two frameworks; a trait-based framework in which IGEs are specified as a direct consequence of individual trait values, and a variance-component framework in which phenotypic variance is decomposed into a direct and an indirect additive genetic component. This work is a selective review of the quantitative genetics of traits affected by IGEs, with a focus on modelling, estimation and interpretation issues. It includes a discussion on variance-component vs trait-based models of IGEs, a review of issues related to the estimation of IGEs from field data, including the estimation of the interaction coefficient Ψ (psi), and a discussion on the relevance of IGEs for response to selection in cases where the strength of interaction varies among pairs of individuals. An investigation of the trait-based model shows that the interaction coefficient Ψ may deviate considerably from the corresponding regression coefficient when feedback occurs. The increasing research effort devoted to IGEs suggests that they are a widespread phenomenon, probably particularly in natural populations and plants. Further work in this field should considerably broaden our understanding of the quantitative genetics of inheritance and response to selection in relation to the social organisation of populations. PMID:23512010

  9. Androgen receptor polyglutamine repeat number: models of selection and disease susceptibility

    PubMed Central

    Ryan, Calen P; Crespi, Bernard J

    2013-01-01

    Variation in polyglutamine repeat number in the androgen receptor (AR CAGn) is negatively correlated with the transcription of androgen-responsive genes and is associated with susceptibility to an extensive list of human diseases. Only a small portion of the heritability for many of these diseases is explained by conventional SNP-based genome-wide association studies, and the forces shaping AR CAGn among humans remain largely unexplored. Here, we propose evolutionary models for understanding selection at the AR CAG locus, namely balancing selection, sexual conflict, accumulation-selection, and antagonistic pleiotropy. We evaluate these models by examining AR CAGn-linked susceptibility to eight extensively studied diseases representing the diverse physiological roles of androgens, and consider the costs of these diseases by their frequency and fitness effects. Five diseases could contribute to the distribution of AR CAGn observed among contemporary human populations. With support for disease susceptibilities associated with long and short AR CAGn, balancing selection provides a useful model for studying selection at this locus. Gender-specific differences in AR CAGn health effects also support this locus as a candidate for sexual conflict over repeat number. Accompanied by the accumulation of AR CAGn in humans, these models help explain the distribution of repeat number in contemporary human populations. PMID:23467468

  11. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems

    PubMed Central

    Toni, Tina; Welch, David; Strelkowa, Natalja; Ipsen, Andreas; Stumpf, Michael P.H.

    2008-01-01

    Approximate Bayesian computation (ABC) methods can be used to evaluate posterior distributions without having to calculate likelihoods. In this paper, we discuss and apply an ABC method based on sequential Monte Carlo (SMC) to estimate parameters of dynamical models. We show that ABC SMC provides information about the inferability of parameters and model sensitivity to changes in parameters, and tends to perform better than other ABC approaches. The algorithm is applied to several well-known biological systems, for which parameters and their credible intervals are inferred. Moreover, we develop ABC SMC as a tool for model selection; given a range of different mathematical descriptions, ABC SMC is able to choose the best model using the standard Bayesian model selection apparatus. PMID:19205079
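
    A minimal ABC rejection sketch conveying the core idea above; ABC SMC improves on this by propagating a weighted particle population through a decreasing tolerance schedule. The exponential-decay model, prior range, and tolerance are assumptions, not the paper's test systems.

```python
# ABC rejection: accept parameter draws whose simulated data fall within a
# tolerance of the observations, then summarize the accepted draws.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 4, 25)
true_rate = 1.3
observed = np.exp(-true_rate * t) + rng.normal(0, 0.03, size=t.size)

def simulate(rate):
    return np.exp(-rate * t) + rng.normal(0, 0.03, size=t.size)

def distance(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

accepted = [r for r in rng.uniform(0.1, 5.0, size=20000)   # uniform prior
            if distance(simulate(r), observed) < 0.05]     # tolerance epsilon
print(f"accepted {len(accepted)} draws, "
      f"posterior mean rate ~ {np.mean(accepted):.2f}")
```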

  12. Hierarchical Classes Models for Three-Way Three-Mode Binary Data: Interrelations and Model Selection

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven

    2005-01-01

    Several hierarchical classes models can be considered for the modeling of three-way three-mode binary data, including the INDCLAS model (Leenen, Van Mechelen, De Boeck, and Rosenberg, 1999), the Tucker3-HICLAS model (Ceulemans,VanMechelen, and Leenen, 2003), the Tucker2-HICLAS model (Ceulemans and Van Mechelen, 2004), and the Tucker1-HICLAS model…

  13. Evaluating experimental design for soil-plant model selection using a Bootstrap Filter and Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.

    2013-12-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to which degree the posterior mean outperforms the prior mean and all

  14. Journal selection decisions: a biomedical library operations research model. I. The framework.

    PubMed Central

    Kraft, D H; Polacsek, R A; Soergel, L; Burns, K; Klair, A

    1976-01-01

    The problem of deciding which journal titles to select for acquisition in a biomedical library is modeled. The approach taken is based on cost/benefit ratios. Measures of journal worth, methods of data collection, and journal cost data are considered. The emphasis is on the development of a practical process for selecting journal titles, based on the objectivity and rationality of the model, and on the collection of the appropriate data and library statistics in a reasonable manner. The implications of this process for an overall management information system (MIS) for biomedical serials handling are discussed. PMID:820391

  15. Using the Animal Model to Accelerate Response to Selection in a Self-Pollinating Crop

    PubMed Central

    Cowling, Wallace A.; Stefanova, Katia T.; Beeck, Cameron P.; Nelson, Matthew N.; Hargreaves, Bonnie L. W.; Sass, Olaf; Gilmour, Arthur R.; Siddique, Kadambot H. M.

    2015-01-01

    We used the animal model in S0 (F1) recurrent selection in a self-pollinating crop including, for the first time, phenotypic and relationship records from self progeny, in addition to cross progeny, in the pedigree. We tested the model in Pisum sativum, the autogamous annual species used by Mendel to demonstrate the particulate nature of inheritance. Resistance to ascochyta blight (Didymella pinodes complex) in segregating S0 cross progeny was assessed by best linear unbiased prediction over two cycles of selection. Genotypic concurrence across cycles was provided by pure-line ancestors. From cycle 1, 102/959 S0 plants were selected, and their S1 self progeny were intercrossed and selfed to produce 430 S0 and 575 S2 individuals that were evaluated in cycle 2. The analysis was improved by including all genetic relationships (with crossing and selfing in the pedigree), additive and nonadditive genetic covariances between cycles, fixed effects (cycles and spatial linear trends), and other random effects. Narrow-sense heritability for ascochyta blight resistance was 0.305 and 0.352 in cycles 1 and 2, respectively, calculated from variance components in the full model. The fitted correlation of predicted breeding values across cycles was 0.82. Average accuracy of predicted breeding values was 0.851 for S2 progeny of S1 parent plants and 0.805 for S0 progeny tested in cycle 2, and 0.878 for S1 parent plants for which no records were available. The forecasted response to selection was 11.2% in the next cycle with 20% S0 selection proportion. This is the first application of the animal model to cyclic selection in heterozygous populations of selfing plants. The method can be used in genomic selection, and for traits measured on S0-derived bulks such as grain yield. PMID:25943522

  17. Crossing statistic: Bayesian interpretation, model selection and resolving dark energy parametrization problem

    SciTech Connect

    Shafieloo, Arman

    2012-05-01

    By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistics [1] can be used straightforwardly for model selection among cosmological models. In this approach, falsifying a cosmological model requires neither comparison with other models nor any assumed parametrization of cosmological quantities such as the luminosity distance, the Hubble parameter, or the equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without placing priors on the actual underlying model of the universe and its parameters; hence the issue of dark energy parametrization is resolved. It is also shown that the method has low sensitivity to the intrinsic dispersion of the data, another important characteristic when testing cosmological models against data with high uncertainties.

  18. Evaluation of Intradural Stimulation Efficiency and Selectivity in a Computational Model of Spinal Cord Stimulation

    PubMed Central

    Howell, Bryan; Lad, Shivanand P.; Grill, Warren M.

    2014-01-01

    Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn

  19. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
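
    The record's iterative "most dissimilar site" step can be illustrated with a much simpler stand-in than maximum entropy modeling: greedily pick, at each step, the candidate whose standardized environmental values are farthest from the nearest already-selected site. The sketch below uses synthetic data for the four environmental factors and is only an assumption-laden caricature of the published procedure.

        import numpy as np

        rng = np.random.default_rng(0)
        env = rng.normal(size=(500, 4))          # 500 candidate sites x 4 factors
        env = (env - env.mean(0)) / env.std(0)   # standardize each factor

        selected = [0]                           # seed with an arbitrary site
        for _ in range(7):                       # select 8 sites in total
            # Distance of every candidate to its nearest already-selected site.
            d = np.min(np.linalg.norm(
                env[:, None, :] - env[selected][None, :, :], axis=2), axis=1)
            selected.append(int(np.argmax(d)))   # most dissimilar candidate

        print(selected)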

  20. Genomic Response to Selection for Predatory Behavior in a Mammalian Model of Adaptive Radiation.

    PubMed

    Konczal, Mateusz; Koteja, Paweł; Orlowska-Feuer, Patrycja; Radwan, Jacek; Sadowska, Edyta T; Babik, Wiesław

    2016-09-01

    If genetic architectures of various quantitative traits are similar, as studies on model organisms suggest, comparable selection pressures should produce similar molecular patterns for various traits. To test this prediction, we used a laboratory model of vertebrate adaptive radiation to investigate the genetic basis of the response to selection for predatory behavior and compare it with evolution of aerobic capacity reported in an earlier work. After 13 generations of selection, the proportion of bank voles (Myodes [=Clethrionomys] glareolus) showing predatory behavior was five times higher in selected lines than in controls. We analyzed the hippocampus and liver transcriptomes and found repeatable changes in allele frequencies and gene expression. Genes with the largest differences between predatory and control lines are associated with hunger, aggression, biological rhythms, and functioning of the nervous system. Evolution of predatory behavior could be meaningfully compared with evolution of high aerobic capacity, because the experiments and analyses were performed in the same methodological framework. The number of genes that changed expression was much smaller in predatory lines, and allele frequencies changed repeatably in predatory but not in aerobic lines. This suggests that more variants of smaller effects underlie variation in aerobic performance, whereas fewer variants of larger effects underlie variation in predatory behavior. Our results thus contradict the view that comparable selection pressures for different quantitative traits produce similar molecular patterns. Therefore, to gain knowledge about molecular-level response to selection for complex traits, we need to investigate not only multiple replicate populations but also multiple quantitative traits. PMID:27401229

  1. Discrete choice modeling of shovelnose sturgeon habitat selection in the Lower Missouri River

    USGS Publications Warehouse

    Bonnot, T.W.; Wildhaber, M.L.; Millspaugh, J.J.; DeLonay, A.J.; Jacobson, R.B.; Bryan, J.L.

    2011-01-01

    Substantive changes to physical habitat in the Lower Missouri River, resulting from intensive management, have been implicated in the decline of pallid (Scaphirhynchus albus) and shovelnose (S. platorynchus) sturgeon. To aid in habitat rehabilitation efforts, we evaluated habitat selection of gravid, female shovelnose sturgeon during the spawning season in two sections (lower and upper) of the Lower Missouri River in 2005 and in the upper section in 2007. We fit discrete choice models within an information theoretic framework to identify selection of means and variability in three components of physical habitat. Characterizing habitat within divisions around fish better explained selection than habitat values at the fish locations. In general, female shovelnose sturgeon were negatively associated with mean velocity between them and the bank and positively associated with variability in surrounding depths. For example, in the upper section in 2005, a 0.5 m s-1 decrease in velocity within 10 m in the bank direction increased the relative probability of selection by 70%. In the upper section fish also selected sites with surrounding structure in depth (e.g., change in relief). Differences in models between sections and years, which are reinforced by validation rates, suggest that changes in habitat due to geomorphology, hydrology, and their interactions over time need to be addressed when evaluating habitat selection. Because of the importance of variability in surrounding depths, these results support an emphasis on restoring channel complexity as an objective of habitat restoration for shovelnose sturgeon in the Lower Missouri River. © 2011 Blackwell Verlag, Berlin.

  2. Discrete choice modeling of shovelnose sturgeon habitat selection in the Lower Missouri River

    USGS Publications Warehouse

    Bonnot, T.W.; Wildhaber, M.L.; Millspaugh, J.J.; DeLonay, A.J.; Jacobson, R.B.; Bryan, J.L.

    2011-01-01

    Substantive changes to physical habitat in the Lower Missouri River, resulting from intensive management, have been implicated in the decline of pallid (Scaphirhynchus albus) and shovelnose (S. platorynchus) sturgeon. To aid in habitat rehabilitation efforts, we evaluated habitat selection of gravid, female shovelnose sturgeon during the spawning season in two sections (lower and upper) of the Lower Missouri River in 2005 and in the upper section in 2007. We fit discrete choice models within an information theoretic framework to identify selection of means and variability in three components of physical habitat. Characterizing habitat within divisions around fish better explained selection than habitat values at the fish locations. In general, female shovelnose sturgeon were negatively associated with mean velocity between them and the bank and positively associated with variability in surrounding depths. For example, in the upper section in 2005, a 0.5 m s-1 decrease in velocity within 10 m in the bank direction increased the relative probability of selection by 70%. In the upper section fish also selected sites with surrounding structure in depth (e.g., change in relief). Differences in models between sections and years, which are reinforced by validation rates, suggest that changes in habitat due to geomorphology, hydrology, and their interactions over time need to be addressed when evaluating habitat selection. Because of the importance of variability in surrounding depths, these results support an emphasis on restoring channel complexity as an objective of habitat restoration for shovelnose sturgeon in the Lower Missouri River.
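
    The discrete choice (conditional logit) models fitted in these two records assign each candidate site a utility that is linear in its habitat attributes; the probability of selecting site j among the available sites is P(j) = exp(x_j'b) / sum_k exp(x_k'b). A minimal sketch, with entirely hypothetical coefficients and habitat values:

        import numpy as np

        def choice_probabilities(X, beta):
            # Conditional logit: P(j) = exp(x_j . beta) / sum_k exp(x_k . beta).
            u = X @ beta
            u -= u.max()           # subtract max for numerical stability
            p = np.exp(u)
            return p / p.sum()

        # Rows: candidate sites; columns: mean velocity (m/s) toward the bank,
        # standard deviation of surrounding depth (m). Values are hypothetical.
        X = np.array([[0.9, 0.3], [0.4, 0.3], [0.4, 0.9]])
        beta = np.array([-2.0, 1.5])   # avoid velocity, prefer depth variability
        print(choice_probabilities(X, beta))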

  3. Uncertainty in Propensity Score Estimation: Bayesian Methods for Variable Selection and Model Averaged Causal Effects

    PubMed Central

    Zigler, Corwin Matthew; Dominici, Francesca

    2014-01-01

    Causal inference with observational data frequently relies on the notion of the propensity score (PS) to adjust treatment comparisons for observed confounding factors. As decisions in the era of “big data” are increasingly reliant on large and complex collections of digital data, researchers are frequently confronted with decisions regarding which of a high-dimensional covariate set to include in the PS model in order to satisfy the assumptions necessary for estimating average causal effects. Typically, simple or ad-hoc methods are employed to arrive at a single PS model, without acknowledging the uncertainty associated with the model selection. We propose three Bayesian methods for PS variable selection and model averaging that 1) select relevant variables from a set of candidate variables to include in the PS model and 2) estimate causal treatment effects as weighted averages of estimates under different PS models. The associated weight for each PS model reflects the data-driven support for that model’s ability to adjust for the necessary variables. We illustrate features of our proposed approaches with a simulation study, and ultimately use our methods to compare the effectiveness of surgical vs. nonsurgical treatment for brain tumors among 2,606 Medicare beneficiaries. Supplementary materials are available online. PMID:24696528

  4. Adaptive global training set selection for spectral estimation of printed inks using reflectance modeling.

    PubMed

    Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Schnitzlein, Markus

    2014-02-01

    The performance of learning-based spectral estimation is greatly influenced by the set of training samples selected to create the reconstruction model. Training sample selection schemes can be categorized into global and local approaches. Most of the previously proposed global training schemes aim to reduce the number of training samples or to select representative samples while maintaining the generality of the training dataset. This work relates to printed ink reflectance estimation for quality assessment in in-line print inspection. We propose what we believe is a novel global training scheme that models a large population of realistic printable ink reflectances. Based on this dataset, we used a recursive top-down algorithm to reject clusters of training samples that do not enhance the performance of a linear least-square regression (pseudoinverse-based estimation) process. A set of experiments with real camera response data of a 12-channel multispectral camera system illustrates the advantages of this selection scheme over some other state-of-the-art algorithms. For our data, our method of global training sample selection outperforms other methods in terms of estimation quality and, more importantly, can quickly handle large datasets. Furthermore, we show that reflectance modeling is a reasonable, convenient tool to generate large training sets for print inspection applications. PMID:24514188
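
    The pseudoinverse-based estimation named in this abstract is ordinary linear least squares: a reconstruction matrix is learned that maps camera responses back to reflectances. A minimal sketch with synthetic data (the 12 channels and 31 spectral bands are assumptions chosen to echo the camera described above):

        import numpy as np

        rng = np.random.default_rng(1)
        R_train = rng.random((31, 200))      # 31 bands x 200 training reflectances
        S = rng.random((12, 31))             # hypothetical channel sensitivities
        C_train = S @ R_train                # simulated 12-channel responses

        W = R_train @ np.linalg.pinv(C_train)   # least-squares reconstruction map

        c_new = S @ rng.random(31)           # response to an unseen reflectance
        r_hat = W @ c_new                    # estimated 31-band reflectance
        print(r_hat.shape)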

  5. The use of vector bootstrapping to improve variable selection precision in Lasso models.

    PubMed

    Laurin, Charles; Boomsma, Dorret; Lubke, Gitta

    2016-08-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections. Nesting cross-validation within bootstrapping could provide further improvements in precision, but this has not been investigated systematically. We performed simulation studies of Lasso variable selection precision (VSP) with and without nesting cross-validation within bootstrapping. Data were simulated to represent genomic data under a polygenic model as well as under a model with effect sizes representative of typical GWAS results. We compared these approaches to each other as well as to software defaults for the Lasso. Nested cross-validation had the most precise variable selection at small effect sizes. At larger effect sizes, there was no advantage to nesting. We illustrated the nested approach with empirical data comprising SNPs and SNP-SNP interactions from the most significant SNPs in a GWAS of borderline personality symptoms. In the empirical example, we found that the default Lasso selected low-reliability SNPs and interactions which were excluded by bootstrapping. PMID:27248122
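
    A minimal sketch of the nested scheme the authors test, with cross-validation run inside each bootstrap replicate, can be put together from scikit-learn; the data here are synthetic, and the replicate count and stability threshold are arbitrary illustrative choices, not the paper's settings.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import LassoCV

        X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                               noise=5.0, random_state=0)

        rng = np.random.default_rng(0)
        n_boot = 50
        counts = np.zeros(X.shape[1])
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), size=len(y))   # bootstrap resample
            model = LassoCV(cv=5).fit(X[idx], y[idx])    # CV nested in replicate
            counts += model.coef_ != 0                   # record selections

        stable = np.where(counts / n_boot >= 0.8)[0]     # kept in >= 80% of runs
        print(stable)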

  6. A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data

    PubMed Central

    DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus

    2014-01-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706

  7. Selection of sugar cane full-sib families using mixed models and ISSR markers.

    PubMed

    Almeida, L M; Viana, A P; Gonçalves, G M; Entringer, G C

    2014-01-01

    In 2006, an experiment examining families belonging to the first selection stage of the Sugar Cane Breeding Program of Universidade Federal Rural do Rio de Janeiro/Rede Interuniversitária para o Desenvolvimento do Setor Sucroalcooleiro was conducted. Families and plants within families were evaluated to select superior plants for subsequent stages of the breeding program. The experiment was arranged in a randomized block design, in which progenies were grouped into 4 sets, each with 4 replicates and 100 seedlings per plot. The following traits were evaluated: average stem diameter, total plot weight, number of stems, Brix of the lower stem, and Brix of the upper stem. The study of families used the restricted maximum likelihood/best linear unbiased procedure mixed models. After selection, families were genotyped via inter-simple sequence repeat to assess the genetic distance of genotypes. This approach was found to be efficient for selecting new genotypes. PMID:25501142

  8. The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site

    SciTech Connect

    Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.; Li, Jun; Wei, Lingli; Yamamoto, Hajime; Gasda, Sarah E.; Hosseini, Seyyed; Nicot, Jean-Philippe; Birkholzer, Jens

    2015-05-23

    Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.

  9. 'Chain pooling' model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.

  10. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    SciTech Connect

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  11. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    DOE PAGESBeta

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  12. Bayesian model averaging to explore the worth of data for soil-plant model selection and prediction

    NASA Astrophysics Data System (ADS)

    Wöhling, Thomas; Schöniger, Anneli; Gayler, Sebastian; Nowak, Wolfgang

    2015-04-01

    A Bayesian model averaging (BMA) framework is presented to evaluate the worth of different observation types and experimental design options for (1) more confidence in model selection and (2) increased predictive reliability. These two modeling tasks are handled separately because model selection aims at identifying the most appropriate model with respect to a given calibration data set, while predictive reliability aims at reducing uncertainty in model predictions through constraining the plausible range of both models and model parameters. For that purpose, we pursue an optimal design of measurement framework that is based on BMA and that considers uncertainty in parameters, measurements, and model structures. We apply this framework to select between four crop models (the vegetation components of CERES, SUCROS, GECROS, and SPASS), which are coupled to identical routines for simulating soil carbon and nitrogen turnover, soil heat and nitrogen transport, and soil water movement. An ensemble of parameter realizations was generated for each model using Monte-Carlo simulation. We assess each model's plausibility by determining its posterior weight, which signifies the probability to have generated a given experimental data set. Several BMA analyses were conducted for different data packages with measurements of soil moisture, evapotranspiration (ETa), and leaf area index (LAI). The posterior weights resulting from the different BMA runs were compared to the weight distribution of a reference run with all data types to investigate the utility of different data packages and monitoring design options in identifying the most appropriate model in the ensemble. We found that different (combinations of) data types support different models and none of the four crop models outperforms all others under all data scenarios. The best model discrimination was observed for those data where the competing models disagree the most. The data worth for reducing prediction
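
    The posterior model weights at the center of this framework can be computed from each model's log marginal likelihood for the calibration data. A minimal sketch, with made-up log likelihoods for four crop models and equal prior model probabilities:

        import numpy as np
        from scipy.special import logsumexp

        def bma_weights(log_marginal_likelihoods, log_priors=None):
            # Posterior weight of model k: p(M_k | D) ~ p(D | M_k) * p(M_k).
            ll = np.asarray(log_marginal_likelihoods, dtype=float)
            lp = np.zeros_like(ll) if log_priors is None else np.asarray(log_priors)
            log_post = ll + lp
            return np.exp(log_post - logsumexp(log_post))

        # Hypothetical log marginal likelihoods for four crop models.
        print(bma_weights([-120.3, -118.9, -125.0, -119.4]))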

  13. Application Of Decision Tree Approach To Student Selection Model- A Case Study

    NASA Astrophysics Data System (ADS)

    Harwati; Sudiya, Amby

    2016-01-01

    The main purpose of the institution is to provide quality education to its students and to improve the quality of managerial decisions. One way to improve the quality of students is to make the admission of new students more selective. This research takes as its case the selection of new students at the Islamic University of Indonesia, Yogyakarta, Indonesia. One of the university's admission routes is administrative filtering based on the high school records of prospective students, without a written test. Currently, this kind of selection has no standard model or criteria. Selection is done only by comparing candidates' application files, so subjective assessment is very likely because there are no standard criteria to differentiate the quality of one student from another. By applying data mining classification techniques, a selection model for new students can be built that includes criteria with defined standards, such as region of origin, school status, average grade, and so on. These criteria are determined using rules derived from classifying the academic achievement (GPA) of students in previous years who entered the university through the same route. The decision tree method with the C4.5 algorithm is used here. The results show that students given priority for admission are those who meet the following criteria: come from the island of Java, attended a public school, majored in science, have an average grade above 75, and earned at least one achievement during high school.
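
    scikit-learn implements CART rather than C4.5, but a tree grown with the entropy (information gain) criterion is a reasonable stand-in for the rule extraction described above; the applicant features and labels below are invented placeholders, not the study's data.

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical encoded applicant records:
        # [from_java, public_school, science_major, avg_score]
        X = [[1, 1, 1, 82], [0, 1, 0, 70], [1, 0, 1, 77], [0, 0, 0, 64],
             [1, 1, 0, 79], [0, 1, 1, 74], [1, 0, 0, 68], [1, 1, 1, 90]]
        y = [1, 0, 1, 0, 1, 0, 0, 1]   # 1 = went on to achieve a high GPA

        tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=[
            "from_java", "public_school", "science_major", "avg_score"]))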

  14. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    PubMed Central

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms (antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization) are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  15. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms (antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization) are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  16. Using distance covariance for improved variable selection with application to learning genetic risk models.

    PubMed

    Kong, Jing; Wang, Sijian; Wahba, Grace

    2015-05-10

    Variable selection is of increasing importance to address the difficulties of high dimensionality in many scientific areas. In this paper, we demonstrate a property for distance covariance, which is incorporated in a novel feature screening procedure together with the use of distance correlation. The approach makes no distributional assumptions for the variables and does not require the specification of a regression model and hence is especially attractive in variable selection given an enormous number of candidate attributes without much information about the true model with the response. The method is applied to two genetic risk problems, where issues including uncertainty of variable selection via cross validation, subgroup of hard-to-classify cases, and the application of a reject option are discussed. PMID:25640961
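
    Distance correlation is model-free: it is computed from double-centered pairwise distance matrices, and it is zero only under independence. A compact self-contained sketch (following the standard Szekely-Rizzo sample definition, not the paper's own code):

        import numpy as np

        def _centered(x):
            d = np.abs(x[:, None] - x[None, :])   # pairwise distances
            return d - d.mean(0) - d.mean(1)[:, None] + d.mean()

        def distance_correlation(x, y):
            # Sample distance correlation of two 1-D variables.
            A = _centered(np.asarray(x, dtype=float))
            B = _centered(np.asarray(y, dtype=float))
            dcov2 = (A * B).mean()
            denom = np.sqrt((A * A).mean() * (B * B).mean())
            return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

        rng = np.random.default_rng(0)
        x = rng.normal(size=300)
        print(distance_correlation(x, x**2))   # nonlinear dependence detected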

  17. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    NASA Astrophysics Data System (ADS)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing, such as model selection or attribute selection, play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.

  18. An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems

    NASA Astrophysics Data System (ADS)

    Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.

    2016-04-01

    Model Order Reduction (MOR) methods are employed in many fields of engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times during the simulation, it can be very inefficient. This work proposes a new adaptive strategy that ensures low computational cost and small error to deal with this problem. It also presents a new snapshot selection method named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of Proper Orthogonal Decomposition (POD).

  19. DIAGNOSTIC EVALUATION OF AIR QUALITY MODELS USING ADVANCED METHODS WITH SPECIALIZED OBSERVATIONS OF SELECTED AMBIENT SPECIES -PART II

    EPA Science Inventory

    This is Part 2 of "Diagnostic Evaluation of Air Quality Models Using Advanced Methods with Specialized Observations of Selected Ambient Species". A limited field campaign to make specialized observations of selected ambient species using advanced and innovative instrumentation f...

  20. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches are considered: a) the rate is based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) the rate is calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models perform similarly well; and c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.
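
    The AIC-weight averaging mentioned in point b) uses the standard Akaike weights w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), where delta_i = AIC_i - min AIC. A minimal sketch with hypothetical AIC values for three candidate rate models:

        import numpy as np

        def akaike_weights(aic):
            # Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2).
            aic = np.asarray(aic, dtype=float)
            delta = aic - aic.min()
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # Hypothetical AIC values; the model-averaged rate would then be
        # sum_i w_i * rate_i over the candidate models.
        print(akaike_weights([102.4, 103.1, 110.8]))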

  1. Model reduction of process-based hydro-ecological models: a comparison between projection- and selection-based methods

    NASA Astrophysics Data System (ADS)

    Alsahaf, Ahmed; Giuliani, Matteo; Galelli, Stefano; Castelletti, Andrea

    2015-04-01

    Complex process-based hydro-ecological models are often used to describe the water quality processes in lakes, rivers and other water resources systems. However, the computational requirements typically associated with these models often prevent their use in computationally intensive applications, such as optimal planning and management. For this reason, the purpose of model reduction is to identify reduced-order models (or emulators) that can adequately replace complex hydro-ecological models in such applications. Projection-based model reduction is one of the most popular approaches used for the identification of emulators. It is based on the idea of sampling from the original model various values, or snapshots, of the state variables, and then using these snapshots in a projection scheme to find a lower-dimensional subspace that captures the majority of the variation of the original model. The model is then projected onto this subspace and solved, yielding a computationally efficient emulator. Yet, this approach may unnecessarily increase the complexity of the emulator, especially when only a few state variables of the original model are relevant with respect to the output of interest. On the other hand, selection-based model reduction uses the information contained in the snapshots to select the state variables of the original model that are relevant with respect to the emulator's output, thus allowing for model reduction. This provides a better trade-off between fidelity and model complexity, since the irrelevant and redundant state variables are excluded from the model reduction process. In this work we address these issues by presenting an exhaustive experimental comparison between two popular projection- and selection-based methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Emulation Modelling (DEMo). The comparison is performed on the reduction of DYRESM-CAEDYM, a 1D hydro-ecological model used to describe the in-reservoir water quality
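
    The projection-based branch of this comparison, POD, reduces to a singular value decomposition of the snapshot matrix followed by truncation to the modes carrying most of the variance. A minimal self-contained sketch on a synthetic low-rank snapshot matrix (the 99% energy threshold is an arbitrary illustrative choice):

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic snapshots: 1000 state variables x 60 snapshots, rank ~3.
        S = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 60))
        S += 0.01 * rng.normal(size=S.shape)

        U, s, _ = np.linalg.svd(S, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.99) + 1)   # modes for 99% of energy
        Phi = U[:, :r]                               # POD basis

        x = S[:, 0]                                  # a full-order state
        x_rec = Phi @ (Phi.T @ x)                    # project and reconstruct
        print(r, np.linalg.norm(x - x_rec) / np.linalg.norm(x))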

  2. Evolution of female multiple mating: A quantitative model of the "sexually selected sperm" hypothesis.

    PubMed

    Bocedi, Greta; Reid, Jane M

    2015-01-01

    Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with "sexy-son" models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and "sexy-son" processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405

  3. A supplier-selection model with classification and joint replenishment of inventory items

    NASA Astrophysics Data System (ADS)

    Mohammaditabar, Davood; Hassan Ghodsypour, Seyed

    2016-06-01

    Since inventory costs are closely related to suppliers, many models in the literature select suppliers and allocate orders simultaneously. Such models usually consider either a single inventory item or multiple inventory items with independent holding and ordering costs. In practice, however, ordering multiple items from the same supplier reduces ordering costs. This paper presents a model for the capacity-constrained supplier-selection and order-allocation problem that considers the joint replenishment of inventory items with a direct grouping approach. The model accounts for a fixed major ordering cost for each supplier, which is independent of the items in the order; a minor ordering cost for each item ordered from each supplier; and the inventory holding and purchasing costs. To solve the resulting NP-hard problem, a simulated annealing algorithm was proposed and then compared to a modified genetic algorithm from the literature. A numerical example showed that the number of groups and selected suppliers decreased as the major ordering cost increased relative to the other costs. Savings were also greater when the number of groups was determined by the model rather than predetermined, or compared with no grouping at all.
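
    A generic simulated-annealing skeleton of the kind applied in the paper is sketched below; the objective is a toy stand-in (unit purchase costs plus a fixed major ordering cost per supplier actually used), not the authors' joint-replenishment cost model, and all figures are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n_items, n_suppliers = 20, 5
        unit_cost = rng.uniform(1.0, 2.0, size=(n_items, n_suppliers))
        major_cost = 3.0                    # fixed cost per supplier used

        def cost(assign):
            purchase = unit_cost[np.arange(n_items), assign].sum()
            return purchase + major_cost * len(np.unique(assign))

        assign = rng.integers(0, n_suppliers, size=n_items)
        best, best_cost, T = assign.copy(), cost(assign), 1.0
        for _ in range(5000):
            cand = assign.copy()
            cand[rng.integers(n_items)] = rng.integers(n_suppliers)  # perturb
            delta = cost(cand) - cost(assign)
            if delta < 0 or rng.random() < np.exp(-delta / T):       # accept?
                assign = cand
                if cost(assign) < best_cost:
                    best, best_cost = assign.copy(), cost(assign)
            T *= 0.999                                               # cool down
        print(best_cost, np.unique(best))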

  4. The Development of a Culturally Fair Model for the Early Identification and Selection of Gifted Children.

    ERIC Educational Resources Information Center

    Storlie, Theodore R.; And Others

    A two-stage model for early identification and selection of gifted children in kindergarden through grade 3 was successfully developed for the Walker Full-time Gifted Program in the Flint, Michigan Community Schools. Using the Nominative Group Process of interactive decision-making, project participants, school administrators, school…

  5. A Cognitive Model of Document Use during a Research Project. Study I. Document Selection.

    ERIC Educational Resources Information Center

    Wang, Peiling; Soergel, Dagobert

    1998-01-01

    Proposes a model of document selection by real users of a bibliographic retrieval system. Reports on Part I of a longitudinal study of decision making on document use by academics (25 faculty and graduate students in Agricultural Economics). Examines what components are relevant to the users' decisions and what cognitive process may have occurred…

  6. Sampling Schemes and the Selection of Log-Linear Models for Longitudinal Data.

    ERIC Educational Resources Information Center

    von Eye, Alexander; Schuster, Christof; Kreppner, Kurt

    2001-01-01

    Discusses the effects of sampling scheme selection on the admissibility of log-linear models for multinomial and product multinomial sampling schemes for prospective and retrospective sampling. Notes that in multinomial sampling, marginal frequencies are not fixed, whereas for product multinomial sampling, uni- or multidimensional frequencies are…

  7. Transverse tripolar stimulation of peripheral nerve: a modelling study of spatial selectivity.

    PubMed

    Deurloo, K E; Holsheimer, J; Boom, H B

    1998-01-01

    Various anode-cathode configurations in a nerve cuff are modelled to predict their spatial selectivity characteristics for functional nerve stimulation. A 3D volume conductor model of a monofascicular nerve is used for the computation of stimulation-induced field potentials, whereas a cable model of myelinated nerve fibre is used for the calculation of the excitation thresholds of fibres. As well as the usual configurations (monopole, bipole, longitudinal tripole, 'steering' anode), a transverse tripolar configuration (central cathode) is examined. It is found that the transverse tripole is the only configuration giving convex recruitment contours and therefore maximises activation selectivity for a small (cylindrical) bundle of fibres in the periphery of a monofascicular nerve trunk. As the electrode configuration is changed to achieve greater selectivity, the threshold current increases. Therefore threshold currents for fibre excitation with a transverse tripole are relatively high. Inverse recruitment is less extreme than for the other configurations. The influences of several geometrical parameters and model conductivities of the transverse tripole on selectivity and threshold current are analysed. In chronic implantation, when electrodes are encapsulated by a layer of fibrous tissue, threshold currents are low, whereas the shape of the recruitment contours in transverse tripolar stimulation does not change. PMID:9614751

  8. Aggressive Adolescents in Residential Care: A Selective Review of Treatment Requirements and Models

    ERIC Educational Resources Information Center

    Knorth, Erik J.; Klomp, Martin; Van den Bergh, Peter M.; Noom, Marc J.

    2007-01-01

    This article presents a selective inventory of treatment methods of aggressive behavior. Special attention is paid to types of intervention that, according to research, are frequently used in Dutch residential youth care. These methods are based on (1) principles of (cognitive) behavior management and control, (2) the social competence model, and…

  9. On Selective Harvesting of an Inshore-Offshore Fishery: A Bioeconomic Model

    ERIC Educational Resources Information Center

    Purohit, D.; Chaudhuri, K. S.

    2004-01-01

    A bioeconomic model is developed for the selective harvesting of a single species, inshore-offshore fishery, assuming that the growth of the species is governed by the Gompertz law. The dynamical system governing the fishery is studied in depth; the local and global stability of its non-trivial steady state are examined. Existence of a bionomic…

  10. Evolution of female multiple mating: A quantitative model of the “sexually selected sperm” hypothesis

    PubMed Central

    Bocedi, Greta; Reid, Jane M

    2015-01-01

    Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405

  11. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    ERIC Educational Resources Information Center

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles Cutter's 3…

  12. The Effects of Selection Strategies for Bivariate Loglinear Smoothing Models on NEAT Equating Functions

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul W.

    2010-01-01

    In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…

  13. A model for selecting assessment methods for evaluating medical students in African medical schools.

    PubMed

    Walubo, Andrew; Burch, Vanessa; Parmar, Paresh; Raidoo, Deshandra; Cassimjee, Mariam; Onia, Rudy; Ofei, Francis

    2003-09-01

    Introduction of more effective and standardized assessment methods for testing students' performance in Africa's medical institutions has been hampered by severe financial and personnel shortages. Nevertheless, some African institutions have recognized the problem and are now revising their medical curricula, and, therefore, their assessment methods. These institutions, and those yet to come, need guidance on selecting assessment methods so as to adopt models that can be sustained locally. The authors provide a model for selecting assessment methods for testing medical students' performance in African medical institutions. The model systematically evaluates factors that influence implementation of an assessment method. Six commonly used methods (the essay examinations, short-answer questions, multiple-choice questions, patient-based clinical examination, problem-based oral examination [POE], and objective structured clinical examination) are evaluated by scoring and weighting against performance, cost, suitability, and safety factors. In the model, the highest score identifies the most appropriate method. Selection of an assessment method is illustrated using two institutional models, one depicting an ideal situation in which the objective structured clinical examination was preferred, and a second depicting the typical African scenario in which the essay and short-answer-question examinations were best. The POE method received the highest score and could be recommended as the most appropriate for Africa's medical institutions, but POE assessments require changing the medical curricula to a problem-based learning approach. The authors' model is easy to understand and promotes change in the medical curriculum and method of student assessment. PMID:14507620

  14. Variable selection in subdistribution hazard frailty models with competing risks data

    PubMed Central

    Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo

    2014-01-01

    The proportional subdistribution hazards model (i.e. Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872

  15. MHC allele frequency distributions under parasite-driven selection: A simulation model

    PubMed Central

    2010-01-01

    Background: The extreme polymorphism that is observed in major histocompatibility complex (MHC) genes, which code for proteins involved in recognition of non-self oligopeptides, is thought to result from a pressure exerted by parasites because parasite antigens are more likely to be recognized by MHC heterozygotes (heterozygote advantage) and/or by rare MHC alleles (negative frequency-dependent selection). The Ewens-Watterson test (EW) is often used to detect selection acting on MHC genes over the recent history of a population. EW is based on the expectation that allele frequencies under balancing selection should be more even than under neutrality. We used computer simulations to investigate whether this expectation holds for selection exerted by parasites on host MHC genes under conditions of heterozygote advantage and negative frequency-dependent selection acting either simultaneously or separately. Results: In agreement with simple models of symmetrical overdominance, we found that heterozygote advantage acting alone in populations does, indeed, result in more even allele frequency distributions than expected under neutrality, and this is easily detectable by EW. However, under negative frequency-dependent selection, or under the joint action of negative frequency-dependent selection and heterozygote advantage, distributions of allele frequencies were less predictable: the majority of distributions were indistinguishable from neutral expectations, while the remaining runs resulted in either more even or more skewed distributions than under neutrality. Conclusions: Our results indicate that, as long as negative frequency-dependent selection is an important force maintaining MHC variation, the EW test has limited utility in detecting selection acting on these genes. PMID:20979635
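
    The mechanics of such a simulation can be conveyed with a much smaller Wright-Fisher-style sketch: rare alleles receive a fitness bonus (negative frequency dependence), drift enters through multinomial sampling, and the Ewens-Watterson-style homozygosity F = sum(p_i^2) is the summary statistic. All parameter values below are illustrative, not those of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_alleles, pop_size, generations, s = 10, 1000, 500, 0.5

        p = np.full(n_alleles, 1.0 / n_alleles)        # allele frequencies
        for _ in range(generations):
            fitness = 1.0 + s * (1.0 / n_alleles - p)  # rare alleles favored
            w = p * fitness
            w /= w.sum()                               # selection step
            counts = rng.multinomial(2 * pop_size, w)  # drift over 2N copies
            p = counts / (2.0 * pop_size)

        F = np.sum(p**2)   # homozygosity, as used by the Ewens-Watterson test
        print(F)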

  16. Encapsulation of a Decision-Making Model to Optimize Supplier Selection via Structural Equation Modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper proposes a conceptual framework to compare criteria/factors that influence supplier selection. A mixed methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes should point to initiatives that can bring procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry. The output will benefit the many parties involved in supplier-selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection, and of the extent to which these perceptions are consistent with findings from prior studies. The findings from this paper can be utilized as input for policy makers to outline changes to the current procurement code of practice that would enhance the degree of transparency and integrity in decision-making.

  17. Selective portal vein injection for the design of syngeneic models of liver malignancy.

    PubMed

    Limani, Perparim; Borgeaud, Nathalie; Linecker, Michael; Tschuor, Christoph; Kachaylo, Ekaterina; Schlegel, Andrea; Jang, Jae-Hwi; Ungethüm, Udo; Montani, Matteo; Graf, Rolf; Humar, Bostjan; Clavien, Pierre-Alain

    2016-05-01

    Liver metastases are the most frequent cause of death due to colorectal cancer (CRC). Syngeneic orthotopic animal models, based on the grafting of cancer cells or tissue in host liver, are efficient systems for studying liver tumors and their (patho)physiological environment. Here we describe selective portal vein injection as a novel tool to generate syngeneic orthotopic models of liver tumors that avoid most of the weaknesses of existing syngeneic models. By combining portal vein injection of cancer cells with the selective clamping of distal liver lobes, tumor growth is limited to specific lobes. When applied on MC-38 CRC cells and their mouse host C57BL6, selective portal vein injection leads with 100% penetrance to MRI-detectable tumors within 1 wk, followed by a steady growth until the time of death (survival ∼7 wk) in the absence of extrahepatic disease. Similar results were obtained using CT-26 cells and their syngeneic Balb/c hosts. As a proof of principle, lobe-restricted liver tumors were also generated using Hepa1-6 (C57BL6-syngeneic) and TIB-75 (Balb/c-syngeneic) hepatocellular cancer cells, demonstrating the general applicability of selective portal vein injection for the induction of malignant liver tumors. Selective portal vein injection is technically straightforward, enables liver invasion via anatomical routes, preserves liver function, and provides unaffected liver tissue. The tumor models are reproducible and highly penetrant, with survival mainly dependent on the growth of lobe-restricted liver malignancy. These models enable biological studies and preclinical testing within short periods of time. PMID:26893160

  18. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to proceed when unobserved population heterogeneity exists in the endogenous latent variables of nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  19. Potential roles of the interaction between model V1 neurons with orientation-selective and non-selective surround inhibition in contour detection.

    PubMed

    Yang, Kai-Fu; Li, Chao-Yi; Li, Yong-Jie

    2015-01-01

    Neurons with both orientation-selective and non-selective surround inhibition have been observed in the primary visual cortex (V1) of primates and cats. Although inhibition from the surround region (the non-classical receptive field, nCRF) is considered to play a critical role in visual perception, the specific roles of orientation-selective and non-selective inhibition in the task of contour detection are less well understood. To clarify this question, we first carried out a computational analysis of the contour detection performance of V1 neurons with different types of surround inhibition, on the basis of which we then proposed two integrated models that combine the two types of surround inhibition in two different ways, in order to evaluate their role in this specific perceptual task. The two models were evaluated with synthetic images and a set of challenging natural images, and the results show that both integrated models outperform the typical models with orientation-selective or non-selective inhibition alone. The findings of this study suggest that V1 neurons with different types of center-surround interaction work in cooperative and adaptive ways, at least when extracting organized structures from cluttered natural scenes. This work is expected to inspire efficient phenomenological models for engineering applications in the field of machine vision. PMID:26136664

  20. Potential roles of the interaction between model V1 neurons with orientation-selective and non-selective surround inhibition in contour detection

    PubMed Central

    Yang, Kai-Fu; Li, Chao-Yi; Li, Yong-Jie

    2015-01-01

    Neurons with both orientation-selective and non-selective surround inhibition have been observed in the primary visual cortex (V1) of primates and cats. Although inhibition from the surround region (the non-classical receptive field, nCRF) is considered to play a critical role in visual perception, the specific roles of orientation-selective and non-selective inhibition in the task of contour detection are less well understood. To clarify this question, we first carried out a computational analysis of the contour detection performance of V1 neurons with different types of surround inhibition, on the basis of which we then proposed two integrated models that combine the two types of surround inhibition in two different ways, in order to evaluate their role in this specific perceptual task. The two models were evaluated with synthetic images and a set of challenging natural images, and the results show that both integrated models outperform the typical models with orientation-selective or non-selective inhibition alone. The findings of this study suggest that V1 neurons with different types of center-surround interaction work in cooperative and adaptive ways, at least when extracting organized structures from cluttered natural scenes. This work is expected to inspire efficient phenomenological models for engineering applications in the field of machine vision. PMID:26136664

  1. The role of model selection in representing evapotranspiration processes in climate impact assessments

    NASA Astrophysics Data System (ADS)

    Guo, Danlu; Westra, Seth; Maier, Holger R.

    2015-04-01

    Projected changes to near-surface atmospheric temperature, wind, humidity and solar radiation are expected to lead to changes in evaporative demand - and thus changes to the catchment water balance - in many catchments worldwide. To quantify the likely implications for runoff, a modelling chain is commonly used in which the meteorological variables are first converted to potential evapotranspiration (PET), followed by the conversion of PET to runoff using one or more rainfall-runoff models. The role of PET model and rainfall-runoff model selection on changes to the catchment water balance is assessed using a sensitivity analysis applied to data from five climatologically different catchments in Australia. Changes to temperature have the strongest influence on both evapotranspiration and runoff for all models and catchments, whereas the relative role of the remaining variables depends on both the catchment location and the PET and rainfall-runoff model choice. Importantly, the sensitivity experiments show that (1) the distributions of climate variables differ between dry and wet conditions, and (2) the seasonal distribution of changes to PET differs across the driving variables. These findings suggest possible interactions between PET model selection and the way that evapotranspiration processes are represented within the rainfall-runoff model. For a constant percentage change to PET, this effect can lead to a five-fold difference in runoff changes depending on which meteorological variable is being perturbed.
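
    To make the sensitivity idea concrete, here is a minimal sketch using the Hargreaves formulation as one example PET model (the study itself compares several PET models; the radiation value and the perturbations below are illustrative assumptions, not the paper's data):

```python
import numpy as np

def hargreaves_pet(tmean, tmax, tmin, ra=30.0):
    """Hargreaves reference PET (mm/day).

    ra is extraterrestrial radiation expressed as equivalent evaporation
    (mm/day); 30.0 is an assumed mid-latitude summer figure.
    """
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

base = hargreaves_pet(20.0, 27.0, 13.0)
warmer = hargreaves_pet(21.0, 28.0, 14.0)   # +1 C on all temperatures
wider = hargreaves_pet(20.0, 28.0, 12.0)    # +2 C diurnal range only

for label, pet in [("baseline", base), ("+1 C warming", warmer),
                   ("+2 C diurnal range", wider)]:
    print(f"{label:>20}: PET = {pet:.2f} mm/day "
          f"({100 * (pet - base) / base:+.1f}%)")
```

    Even in this toy case, perturbing different driving variables by comparable amounts changes PET by different percentages, which is the mechanism behind the model-dependence the abstract describes.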

  2. The effect of synaptic plasticity on orientation selectivity in a balanced model of primary visual cortex

    PubMed Central

    Gonzalo Cogno, Soledad; Mato, Germán

    2015-01-01

    Orientation selectivity is ubiquitous in the primary visual cortex (V1) of mammals. In cats and monkeys, V1 displays spatially ordered maps of orientation preference. Instead, in mice, squirrels, and rats, orientation selective neurons in V1 are not spatially organized, giving rise to a seemingly random pattern usually referred to as a salt-and-pepper layout. The fact that such different organizations can sharpen orientation tuning leads one to question the structural role of the intracortical connections, specifically the influence of plasticity and the generation of functional connectivity. In this work, we analyze the effect of plasticity processes on orientation selectivity for both scenarios. We study a computational model of layer 2/3 and a reduced one-dimensional model of orientation selective neurons, both in the balanced state. We analyze two plasticity mechanisms. The first one involves spike-timing dependent plasticity (STDP), while the second one considers the reconnection of the interactions according to the preferred orientations of the neurons. We find that under certain conditions STDP can indeed improve selectivity, but it works in a somewhat unexpected way, that is, by effectively decreasing the modulated part of the intracortical connectivity as compared to the non-modulated part of it. For the reconnection mechanism we find that increasing functional connectivity leads, in fact, to a decrease in orientation selectivity if the network is in a stable balanced state. Both counterintuitive results are a consequence of the dynamics of the balanced state. We also find that selectivity can increase due to a reconnection process if the resulting connections give rise to an unstable balanced state. We compare these findings with recent experimental results. PMID:26347615

  3. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-01-01

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil, using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, considering the mean performance of the genotypes across all environments (no interaction effect); the performance in each environment (with an interaction effect); and simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection. PMID:27173301

  4. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection

    PubMed Central

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481

  5. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.

    PubMed

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481

  6. Models Used to Select Strategic Planning Experts for High Technology Productions

    NASA Astrophysics Data System (ADS)

    Zakharova, Alexandra A.; Grigorjeva, Antonina A.; Tseplit, Anna P.; Ozgogov, Evgenij V.

    2016-04-01

    The article deals with the problems and specific aspects of organizing the work of experts involved in the assessment of companies that manufacture complex high-technology products. A model is presented for evaluating the competences of experts in individual functional areas of expertise. Experts are selected to build a group on the basis of tables used to determine their competence level. An expert selection model based on fuzzy logic is proposed in which additional requirements for the expert group composition can be taken into account, with regard to the required quality and the competence-related preferences of decision-makers. A Web-based information system model is developed for the interaction between experts and decision-makers when carrying out online examinations.

  7. The permeability of reconstituted nuclear pores provides direct evidence for the selective phase model.

    PubMed

    Hülsmann, Bastian B; Labokha, Aksana A; Görlich, Dirk

    2012-08-17

    Nuclear pore complexes (NPCs) maintain a permeability barrier between the nucleus and the cytoplasm through FG-repeat-containing nucleoporins (Nups). We previously proposed a "selective phase model" in which the FG repeats interact with one another to form a sieve-like barrier that can be locally disrupted by the binding of nuclear transport receptors (NTRs), but not by inert macromolecules, allowing selective passage of NTRs and associated cargo. Here, we provide direct evidence for this model in a physiological context. By using NPCs reconstituted from Xenopus laevis egg extracts, we show that Nup98 is essential for maintaining the permeability barrier. Specifically, the multivalent cohesion between FG repeats is required, including cohesive FG repeats close to the anchorage point to the NPC scaffold. Our data exclude alternative models that are based solely on an interaction between the FG repeats and NTRs and indicate that the barrier is formed by a sieve-like FG hydrogel. PMID:22901806

  8. A Bayesian hierarchical model with spatial variable selection: the effect of weather on insurance claims

    PubMed Central

    Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth

    2013-01-01

    Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890
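
    The spatially smoothed variable selection and reversible jump MCMC machinery are beyond a short sketch, but the core claim-count component can be illustrated with an ordinary (non-spatial) Poisson GLM on synthetic weather covariates; the covariate names, units and coefficients below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 365
# synthetic daily weather covariates (hypothetical units)
precip = rng.gamma(2.0, 3.0, n)          # mm of rain
wind = rng.gamma(3.0, 2.0, n)            # m/s mean wind speed
X = sm.add_constant(np.column_stack([precip, wind]))

# synthetic claim counts whose log-mean rises with rain and wind
true_beta = np.array([-1.0, 0.08, 0.10])
claims = rng.poisson(np.exp(X @ true_beta))

model = sm.GLM(claims, X, family=sm.families.Poisson())
result = model.fit()
print(result.params)                     # estimates close to true_beta
```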

  9. EXONEST: Bayesian model selection applied to the detection and characterization of exoplanets via photometric variations

    SciTech Connect

    Placek, Ben; Knuth, Kevin H.; Angerhausen, Daniel E-mail: kknuth@albany.edu

    2014-11-10

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.

  10. Stochastic modeling of coal gasification combined cycle systems: Cost models for selected integrated gasification combined cycle (IGCC) systems

    SciTech Connect

    Frey, H.C.; Rubin, E.S.

    1990-06-01

    This report documents cost models developed for selected integrated gasification combined cycle (IGCC) systems. The objective is to obtain a series of capital and operating cost models that can be integrated with an existing set of IGCC process performance models developed at the US Department of Energy Morgantown Energy Technology Center. These models are implemented in ASPEN, a Fortran-based process simulator. Under a separate task, a probabilistic modeling capability has been added to the ASPEN simulator, facilitating analysis of uncertainties in new process performance and cost (Diwekar and Rubin, 1989). One application of the cost models presented here is to explicitly characterize uncertainties in capital and annual costs, supplanting the traditional approach of incorporating uncertainty via a contingency factor. The IGCC systems selected by DOE/METC for cost model development include the following: KRW gasifier with cold gas cleanup; KRW gasifier with hot gas cleanup; and Lurgi gasifier with hot gas cleanup. For each technology, the cost model includes both capital and annual costs. The capital cost models estimate the costs of each major plant section as a function of key performance and design parameters. A standard cost method based on the Electric Power Research Institute (EPRI) Technical Assessment Guide (1986) was adopted. The annual cost models are based on operating and maintenance labor requirements, maintenance material requirements, the costs of utilities and reagent consumption, and credits from byproduct sales. Uncertainties in cost parameters are identified for both capital and operating cost models. Appendices contain cost models for the above three IGCC systems, a number of operating trains subroutines, range checking subroutines, and financial subroutines. 88 refs., 69 figs., 21 tabs.
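
    The report's models themselves are not reproduced here, but the general shape of such capital cost models can be sketched with a standard capacity-exponent scaling rule; the exponent, contingency factor and costs below are illustrative placeholders, not values from the report:

```python
def scaled_capital_cost(size, ref_cost, ref_size, exponent=0.7,
                        contingency=0.15):
    """Scale a reference capital cost to a new plant size.

    Uses the common capacity-exponent rule; the exponent and contingency
    are illustrative placeholders, not figures from the report.
    """
    direct = ref_cost * (size / ref_size) ** exponent
    return direct * (1.0 + contingency)

# e.g. a plant section costed at $120M for 400 MW, rescaled to 600 MW
print(f"${scaled_capital_cost(600, 120e6, 400) / 1e6:.1f}M")
```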

  11. Continuing professional education and the selection of candidates: the case for a tripartite model.

    PubMed

    Ellis, L B

    2000-02-01

    This paper argues the case for a tripartite model involving the manager, educator and practitioner in the selection of candidates for programmes of continuing professional education (CPE). Nurse educators are said to play a key link in the education-practice chain (Pendleton & Myles 1991), yet with the introduction of a market philosophy for education, the educator appears to have little, if any, influence over the selection of CPE candidates. Empirical studies on the value of an effective system for identifying the educational needs of the individual and the locality are unequivocal in specifying the benefits of a collaborative selection process (Larcombe & Maggs 1991). However, there are few studies that offer a model of collaboration, and fewer still on how to operationalize such a model. This paper presents the policy and legislative context of CPE leading to the development of a market philosophy. The tensions between educational reforms such as life-long learning and diminishing, finite resources are highlighted. These strategic issues provide the backdrop and rationale for considering the process for identifying CPE needs, and the characteristics of an effective system as suggested in the literature. Finally, this paper outlines recommendations for a partnership between the manager, practitioner and educationalist in the selection of CPE candidates. PMID:11148842

  12. 'Chain pooling' model selection for two-level fixed effects factorial experiments

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1980-01-01

    As many as three iterated statistical model deletion procedures are considered for an experiment. Population model coefficients were chosen to simulate a saturated factorial experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation (in the neighborhood of 65 percent), (2) a strategy to be used in anticipation of small coefficients of variation (4 percent or less), and (3) a security regret strategy to be used in the absence of such prior knowledge.

  13. Chain Pooling modeling selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation (approximately 65 percent), (2) a strategy to be used in anticipation of small coefficients of variation (4 percent or less), and (3) a security regret strategy to be used in the absence of such prior knowledge.

  14. Catalytic conversion reactions in nanoporous systems with concentration-dependent selectivity: Statistical mechanical modeling

    NASA Astrophysics Data System (ADS)

    García, Andrés; Wang, Jing; Windus, Theresa L.; Sadow, Aaron D.; Evans, James W.

    2016-05-01

    Statistical mechanical modeling is developed to describe a catalytic conversion reaction A → Bc or Bt with concentration-dependent selectivity of the products, Bc or Bt, where reaction occurs inside catalytic particles traversed by narrow linear nanopores. The associated restricted diffusive transport, which in the extreme case is described by single-file diffusion, naturally induces strong concentration gradients. Furthermore, by comparing kinetic Monte Carlo simulation results with analytic treatments, selectivity is shown to be impacted by strong spatial correlations induced by restricted diffusivity in the presence of reaction and also by a subtle clustering of reactants, A.

  15. Catalytic conversion reactions in nanoporous systems with concentration-dependent selectivity: Statistical mechanical modeling

    DOE PAGESBeta

    Garcia, Andres; Wang, Jing; Windus, Theresa L.; Sadow, Aaron D.; Evans, James W.

    2016-05-20

    Statistical mechanical modeling is developed to describe a catalytic conversion reaction A → Bc or Bt with concentration-dependent selectivity of the products, Bc or Bt, where reaction occurs inside catalytic particles traversed by narrow linear nanopores. The associated restricted diffusive transport, which in the extreme case is described by single-file diffusion, naturally induces strong concentration gradients. Hence, by comparing kinetic Monte Carlo simulation results with analytic treatments, selectivity is shown to be impacted by strong spatial correlations induced by restricted diffusivity in the presence of reaction and also by a subtle clustering of reactants, A.
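
    The single-file regime mentioned above has a well-known signature: the mean-squared displacement (MSD) of a tagged particle grows roughly as the square root of time rather than linearly. The toy lattice simulation below (not the authors' model; the lattice size, density and step counts are arbitrary assumptions) reproduces that hallmark:

```python
import numpy as np

rng = np.random.default_rng(42)

def tagged_msd(L=400, n_part=200, steps=20000, samples=100):
    """Tagged-particle MSD on a periodic 1D lattice with no passing."""
    msd = np.zeros(steps)
    for _ in range(samples):
        pos = np.sort(rng.choice(L, size=n_part, replace=False))
        occupied = np.zeros(L, dtype=bool)
        occupied[pos] = True
        disp = np.zeros(n_part)                  # unwrapped displacements
        for t in range(steps):
            i = rng.integers(n_part)
            step = 1 if rng.random() < 0.5 else -1
            target = (pos[i] + step) % L         # periodic boundary
            if not occupied[target]:             # single-file: blocked hops
                occupied[pos[i]] = False
                occupied[target] = True
                pos[i] = target
                disp[i] += step
            msd[t] += disp[n_part // 2] ** 2     # track one tagged particle
    return msd / samples

msd = tagged_msd()
for t in (1000, 4000, 16000):
    print(f"t={t:>6}: MSD={msd[t - 1]:.2f}")     # roughly doubles as t quadruples
```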

  16. Selection between Michaelis-Menten and target-mediated drug disposition pharmacokinetic models.

    PubMed

    Yan, Xiaoyu; Mager, Donald E; Krzyzanski, Wojciech

    2010-02-01

    Target-mediated drug disposition (TMDD) models have been applied to describe the pharmacokinetics of drugs whose distribution and/or clearance are affected by their target due to high binding affinity and limited capacity. The Michaelis-Menten (M-M) model has also been frequently used to describe the pharmacokinetics of such drugs. The purpose of this study is to investigate conditions for equivalence between M-M and TMDD pharmacokinetic models and provide guidelines for selection between these two approaches. Theoretical derivations were used to determine conditions under which M-M and TMDD pharmacokinetic models are equivalent. Computer simulations and model fitting were conducted to demonstrate these conditions. Typical M-M and TMDD profiles were simulated based on literature data for an anti-CD4 monoclonal antibody (TRX1) and phenytoin administered intravenously. Both models were fitted to data and goodness-of-fit criteria were evaluated for model selection. A case study of recombinant human erythropoietin was conducted to qualify the results. A rapid-binding TMDD model is equivalent to the M-M model if the total target density R_tot is constant, and R_tot*K_D/(K_D + C)^2 < 1, where K_D represents the dissociation constant and C is the free drug concentration. Under these conditions, the M-M parameters are defined as V_max = k_int*R_tot*V_c and K_m = K_D, where k_int represents an internalization rate constant and V_c is the volume of the central compartment. R_tot is constant if and only if k_int = k_deg, where k_deg is a degradation rate constant. If the TMDD model predictions are not sensitive to the k_int or k_deg parameters, the condition R_tot*K_D/(K_D + C)^2 < 1 alone can preserve the equivalence between the rapid-binding TMDD and M-M models. The model selection process for drugs that exhibit TMDD should involve a full mechanistic model as well as reduced models. The best model
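
    The equivalence condition quoted in the abstract is easy to check numerically for a given parameter set; a small sketch, with all parameter values illustrative rather than taken from the paper:

```python
import numpy as np

def equivalence_ratio(C, R_tot, K_D):
    """Evaluate R_tot * K_D / (K_D + C)**2, the abstract's condition (< 1).

    When the ratio is small (and R_tot is constant), the rapid-binding
    TMDD model collapses to M-M with K_m = K_D and V_max = k_int*R_tot*V_c.
    """
    return R_tot * K_D / (K_D + C) ** 2

C = np.logspace(-2, 2, 5)                     # free drug conc. (illustrative units)
ratio = equivalence_ratio(C, R_tot=1.0, K_D=10.0)
print(np.round(ratio, 4), "-> all below 1:", bool(np.all(ratio < 1)))

# translating TMDD parameters into the equivalent M-M parameters
k_int, V_c, R_tot, K_D = 0.1, 3.0, 1.0, 10.0  # illustrative values
print(f"V_max = {k_int * R_tot * V_c:.2f}, K_m = {K_D:.1f}")
```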

  17. Disentangling the formation of contrasting tree-line physiognomies combining model selection and Bayesian parameterization for simulation models.

    PubMed

    Martínez, Isabel; Wiegand, Thorsten; Camarero, J Julio; Batllori, Enric; Gutiérrez, Emilia

    2011-05-01

    Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in this type of models into a more transparent, effective, and efficient exercise. PMID:21508601

  18. Mixture regression models for closed population capture-recapture data.

    PubMed

    Tounkara, Fodé; Rivest, Louis-Paul

    2015-09-01

    In capture-recapture studies, the use of individual covariates has been recommended to get stable population estimates. However, some residual heterogeneity might still exist and ignoring such heterogeneity could lead to underestimating the population size (N). In this work, we explore two new models with capture probabilities depending on both covariates and unobserved random effects, to estimate the size of a population. Inference techniques, including the Horvitz-Thompson estimate and confidence intervals for the population size, are derived. The selection of a particular model is carried out using the Akaike information criterion (AIC). First, we extend the random effect model of Darroch et al. (1993, Journal of the American Statistical Association 88, 1137-1148) to handle unit-level covariates and discuss its limitations. The second approach is a generalization of the traditional zero-truncated binomial model that includes a random effect to account for an unobserved heterogeneity. This approach provides useful tools for inference about N, since key quantities such as moments, likelihood functions and estimates of N and their standard errors have closed-form expressions. Several models for the unobserved heterogeneity are available and the marginal capture probability is expressed using the logit and the complementary log-log link functions. The sensitivity of the inference to the specification of a model is also investigated through simulations. A numerical example is presented. We compare the performance of the proposed estimator with that obtained under model Mh of Huggins (1989, Biometrika 76, 130-140). PMID:25963047
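
    The AIC-based selection step can be illustrated with a stripped-down version of the problem: fit two competing zero-truncated count models to capture frequencies by maximum likelihood and keep the one with the smaller AIC. The models below are much simpler than the paper's mixture models, and the data are synthetic:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(7)
# synthetic capture counts of detected animals (zero counts are unobservable)
k = rng.poisson(1.8, 2000)
k = k[k > 0]

def nll_ztpois(lam):
    """Negative log-likelihood of a zero-truncated Poisson."""
    return -np.sum(k * np.log(lam) - lam - gammaln(k + 1)
                   - np.log1p(-np.exp(-lam)))

def nll_ztgeom(q):
    """Negative log-likelihood of a geometric model on {1, 2, ...}."""
    return -np.sum((k - 1) * np.log(q) + np.log1p(-q))

fits = {
    "zero-trunc Poisson": minimize_scalar(nll_ztpois, bounds=(1e-6, 20),
                                          method="bounded"),
    "geometric": minimize_scalar(nll_ztgeom, bounds=(1e-6, 1 - 1e-6),
                                 method="bounded"),
}
for name, fit in fits.items():
    aic = 2 * fit.fun + 2 * 1        # one free parameter in each model
    print(f"{name:>20}: AIC = {aic:.1f}")
```

    Since the data were generated from a truncated Poisson, the Poisson model should win the AIC comparison; with real capture-recapture data the same mechanics apply to the paper's richer candidate set.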

  19. Biodegradation and cometabolic modeling of selected beta blockers during ammonia oxidation.

    PubMed

    Sathyamoorthy, Sandeep; Chandran, Kartik; Ramsburg, C Andrew

    2013-11-19

    Accurate prediction of pharmaceutical concentrations in wastewater effluents requires that the specific biochemical processes responsible for pharmaceutical biodegradation be elucidated and integrated within any modeling framework. The fate of three selected beta blockers (atenolol, metoprolol, and sotalol) was examined during nitrification using batch experiments to develop and evaluate a new cometabolic process-based (CPB) model. CPB model parameters describe biotransformation during and after ammonia oxidation for specific biomass populations and are designed to be integrated within the Activated Sludge Models framework. Metoprolol and sotalol were not biodegraded by the nitrification enrichment culture employed herein. Biodegradation of atenolol was observed and linked to the activity of ammonia-oxidizing bacteria (AOB) and heterotrophs but not nitrite-oxidizing bacteria. Results suggest that the role of AOB in atenolol degradation may be disproportionately more significant than is otherwise suggested by their lower relative abundance in typical biological treatment processes. Atenolol was observed to competitively inhibit AOB growth in our experiments, though model simulations suggest inhibition is most relevant at atenolol concentrations greater than approximately 200 ng/L. CPB model parameters were found to be relatively insensitive to biokinetic parameter selection, suggesting the model approach may hold utility for describing pharmaceutical biodegradation during biological wastewater treatment. PMID:24112027

  20. An Optimization Model for the Selection of Bus-Only Lanes in a City

    PubMed Central

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers’ route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001

  1. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001
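
    A full bi-level model with capacity-constrained assignment is beyond a short example, but the genetic-algorithm selection of a line subset under a budget can be sketched as follows; the benefit/cost numbers and the scalar fitness are toy stand-ins for the lower-level assignment model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_lines = 12
benefit = rng.uniform(1.0, 5.0, n_lines)     # travel-time savings per line (toy)
cost = rng.uniform(1.0, 3.0, n_lines)        # lane-km cost per line (toy)
budget = 12.0

def fitness(pop):
    """Total benefit minus a penalty for exceeding the budget."""
    over = np.maximum(pop @ cost - budget, 0.0)
    return pop @ benefit - 10.0 * over

pop = rng.integers(0, 2, (60, n_lines))      # random initial population
for gen in range(200):
    f = fitness(pop)
    # tournament selection of parents
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    parents = pop[np.where(f[a] > f[b], a, b)]
    # one-point crossover between consecutive parents
    cut = rng.integers(1, n_lines, len(pop))
    mask = np.arange(n_lines) < cut[:, None]
    children = np.where(mask, parents, np.roll(parents, -1, axis=0))
    # bit-flip mutation
    flip = rng.random(children.shape) < 0.02
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax(fitness(pop))]
print("selected lines:", np.flatnonzero(best), "cost:", round(best @ cost, 2))
```

    In the paper's actual model the fitness of a chromosome is obtained by solving the lower-level passenger assignment problem, not by a fixed benefit vector as assumed here.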

  2. Impacts of land cover data selection and trait parameterisation on dynamic modelling of species' range expansion.

    PubMed

    Heikkinen, Risto K; Bocedi, Greta; Kuussaari, Mikko; Heliölä, Janne; Leikola, Niko; Pöyry, Juha; Travis, Justin M J

    2014-01-01

    Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect. Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters and indicate that these need to be carefully evaluated before the models are applied to conservation planning. PMID:25265281

  3. An improved swarm optimization for parameter estimation and biological model selection.

    PubMed

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving these processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This

  4. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation of dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. This
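
    As a compact illustration of the estimate-then-select workflow described above (using SciPy's differential evolution as a generic global optimizer in place of the authors' hybrid method), two candidate growth models are fitted to noisy synthetic data and compared by AIC:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 60)
# synthetic logistic data with Gaussian noise
y = 5.0 / (1.0 + np.exp(-(t - 4.0))) + rng.normal(0, 0.2, t.size)

models = {
    "exponential": (lambda p: p[0] * (1 - np.exp(-p[1] * t)),
                    [(0, 20), (0, 5)]),
    "logistic": (lambda p: p[0] / (1 + np.exp(-p[1] * (t - p[2]))),
                 [(0, 20), (0, 5), (0, 10)]),
}

for name, (f, bounds) in models.items():
    sse = lambda p: np.sum((y - f(p)) ** 2)       # least-squares objective
    fit = differential_evolution(sse, bounds, seed=1, tol=1e-8)
    n, kpar = t.size, len(bounds)
    aic = n * np.log(fit.fun / n) + 2 * kpar      # Gaussian-error AIC
    print(f"{name:>12}: SSE={fit.fun:.2f}  AIC={aic:.1f}")
```

    The logistic model, which generated the data, should produce the lower AIC despite its extra parameter.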

  5. Geological feature selection in reservoir modelling and history matching with Multiple Kernel Learning

    NASA Astrophysics Data System (ADS)

    Demyanov, V.; Backhouse, L.; Christie, M.

    2015-12-01

    There is a continuous challenge in identifying and propagating geologically realistic features into reservoir models. Many contemporary geostatistical algorithms are limited by various modelling assumptions, such as stationarity or Gaussianity. Another related challenge is to ensure that the realistic geological features introduced into a geomodel are preserved during the model update in history matching studies, when the model properties are tuned to fit the flow response to production data. The above challenges motivate the exploration and application of other statistical approaches to building and calibrating reservoir models, in particular methods based on statistical learning. The paper proposes a novel data-driven approach - Multiple Kernel Learning (MKL) - for modelling porous property distributions in sub-surface reservoirs. Multiple Kernel Learning aims to extract relevant spatial features from spatial patterns and to combine them in a non-linear way. This ability makes it possible to handle multiple geological scenarios, which represent different spatial scales and a range of modelling concepts/assumptions. Multiple Kernel Learning is not restricted by deterministic or statistical modelling assumptions and is therefore more flexible for modelling heterogeneity at different scales and integrating data and knowledge. We demonstrate an MKL application to a problem of history matching based on diverse prior information embedded into a range of possible geological scenarios. MKL was able to select the most influential prior geological scenarios and fuse the selected spatial features into a multi-scale property model. MKL was applied to the Brugge history matching benchmark example by calibrating the MKL reservoir model parameters to production data. The history matching results were compared to those obtained from other contemporary approaches - EnKF and kernel PCA with stochastic optimisation.
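
    A minimal sketch of the MKL idea, assuming a convex combination of two RBF kernels at different spatial scales fed to a precomputed-kernel learner, with the combination weight chosen by validation error; this is a toy reading of the approach, not the authors' implementation:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, (120, 2))                 # well locations (toy)
y = np.sin(X[:, 0]) + 0.3 * np.sin(3 * X[:, 1])  # "porosity" with two scales
tr, te = np.arange(80), np.arange(80, 120)

def combined(A, B, w):
    """Convex combination of a long- and a short-range RBF kernel."""
    return w * rbf_kernel(A, B, gamma=0.1) + (1 - w) * rbf_kernel(A, B, gamma=3.0)

best = None
for w in np.linspace(0, 1, 11):                  # weight = kernel relevance
    model = KernelRidge(alpha=0.1, kernel="precomputed")
    model.fit(combined(X[tr], X[tr], w), y[tr])
    err = np.mean((model.predict(combined(X[te], X[tr], w)) - y[te]) ** 2)
    if best is None or err < best[1]:
        best = (w, err)
print(f"best kernel weight w={best[0]:.1f}, test MSE={best[1]:.4f}")
```

    In a genuine MKL formulation the kernel weights are learned jointly with the predictor rather than by the grid search assumed here; the grid keeps the sketch short.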

  6. Automatic Model Selection for 3D Reconstruction of Buildings from Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Partovi, T.; Arefi, H.; Krauß, T.; Reinartz, P.

    2013-09-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted considerable interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method that achieves high-quality models and complete automation from the mentioned DSM, this paper proposes a new method based on a model-driven strategy. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of this method is based on ridge line extraction and the analysis of height values along, and perpendicular to, the ridge line direction. After applying pre-processing to the orthorectified data, some feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. Finally, these ridge lines are refined by matching them or closing gaps. In order to select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted using Canny edge detection and the parameters are derived from the roof parts. Then the best model is fitted to the extracted roofs based on the detected model type. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
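
    The RANSAC ridge-line fitting step mentioned above can be sketched as follows (pure NumPy on synthetic ridge candidates; the distance threshold and iteration count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)

def ransac_line(pts, n_iter=500, tol=0.5):
    """Fit a 2D line to ridge-candidate points, robust to outliers.

    Returns ((point_on_line, unit_direction), inlier_count) for the
    candidate line with the most inliers.
    """
    best_inliers, best = 0, None
    for _ in range(n_iter):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # perpendicular distance of all points to the line through p and q
        r = pts - p
        dist = np.abs(r[:, 0] * d[1] - r[:, 1] * d[0])
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (p, d)
    return best, best_inliers

# synthetic ridge points along y = 0.5x + 2, plus clutter
x = rng.uniform(0, 20, 60)
ridge = np.column_stack([x, 0.5 * x + 2 + rng.normal(0, 0.2, 60)])
clutter = rng.uniform(0, 20, (30, 2))
line, n_in = ransac_line(np.vstack([ridge, clutter]))
print("inliers:", n_in, "direction:", np.round(line[1], 3))
```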

  7. Commentary on Factorial versus Typological Models: Complementary Evidence in the Model Selection Process

    ERIC Educational Resources Information Center

    Samuelsen, Karen

    2012-01-01

    The notion that there is often no clear distinction between factorial and typological models (von Davier, Naemi, & Roberts, this issue) is sound. As von Davier et al. state, theory often indicates a preference between these models; however, the statistical criteria by which they are delineated offer much less clarity. In many ways the procedure…

  8. Forward-in-Time, Spatially Explicit Modeling Software to Simulate Genetic Lineages Under Selection

    PubMed Central

    Currat, Mathias; Gerbault, Pascale; Di, Da; Nunes, José M.; Sanchez-Mazas, Alicia

    2015-01-01

    SELECTOR is a software package for studying the evolution of multiallelic genes under balancing or positive selection while simulating complex evolutionary scenarios that integrate demographic growth and migration in a spatially explicit population framework. Parameters can be varied both in space and time to account for geographical, environmental, and cultural heterogeneity. SELECTOR can be used within an approximate Bayesian computation estimation framework. We first describe the principles of SELECTOR and validate the algorithms by comparing its outputs for simple models with theoretical expectations. Then, we show how it can be used to investigate genetic differentiation of loci under balancing selection in interconnected demes with spatially heterogeneous gene flow. We identify situations in which balancing selection reduces genetic differentiation between population groups compared with neutrality and explain conflicting outcomes observed for human leukocyte antigen loci. These results and three previously published applications demonstrate that SELECTOR is efficient and robust for building insight into human settlement history and evolution. PMID:26949332

  9. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    NASA Technical Reports Server (NTRS)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for structural target mode selection based on a specific criterion is presented. An effective approach to single out the modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present the Root-Sum-Square (RSS) displacement method, which computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
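
    A direct reading of the described method, on toy data, might look like the following; the mode-shape matrix and the DOF indices are hypothetical:

```python
import numpy as np

def rank_modes_by_rss(phi, dof_idx):
    """Rank modes by root-sum-square displacement at selected DOFs.

    phi: (n_dof, n_modes) mode-shape matrix; dof_idx: DOFs of interest
    (e.g. avionics or actuator attachment points).
    """
    rss = np.sqrt(np.sum(phi[dof_idx, :] ** 2, axis=0))
    order = np.argsort(rss)[::-1]                # highest RSS first
    return order, rss[order]

rng = np.random.default_rng(4)
phi = rng.normal(size=(300, 40))                 # toy: 300 DOFs, 40 modes
order, rss = rank_modes_by_rss(phi, dof_idx=[10, 11, 12, 45, 46])
print("most influential modes:", order[:5], "RSS:", np.round(rss[:5], 2))
```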

  10. Quantification and deconvolution of asymmetric LC-MS peaks using the bi-Gaussian mixture model and statistical model selection

    PubMed Central

    2010-01-01

    Background Liquid chromatography-mass spectrometry (LC-MS) is one of the major techniques for the quantification of metabolites in complex biological samples. Peak modeling is one of the key components in LC-MS data pre-processing. Results To quantify asymmetric peaks with high noise level, we developed an estimation procedure using the bi-Gaussian function. In addition, to accurately quantify partially overlapping peaks, we developed a deconvolution method using the bi-Gaussian mixture model combined with statistical model selection. Conclusions Using extensive simulations and real data, we demonstrated the advantage of the bi-Gaussian mixture model over the Gaussian mixture model and the method of kernel smoothing combined with signal summation in peak quantification and deconvolution. The method is implemented in the R package apLCMS: http://www.sph.emory.edu/apLCMS/. PMID:21073736
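
    As an illustration of the peak model, a bi-Gaussian (separate left and right widths around the apex, which captures peak tailing) can be fitted with ordinary nonlinear least squares; the retention-time axis, noise level and starting values below are synthetic assumptions, not the package's internals:

```python
import numpy as np
from scipy.optimize import curve_fit

def bigaussian(x, a, m, s1, s2):
    """Bi-Gaussian peak: width s1 left of the apex m, s2 to the right."""
    s = np.where(x < m, s1, s2)
    return a * np.exp(-0.5 * ((x - m) / s) ** 2)

rng = np.random.default_rng(6)
x = np.linspace(0, 60, 300)                       # retention time (toy)
y = bigaussian(x, 100.0, 25.0, 3.0, 8.0) + rng.normal(0, 2.0, x.size)

p0 = [y.max(), x[np.argmax(y)], 2.0, 2.0]         # crude initial guess
popt, _ = curve_fit(bigaussian, x, y, p0=p0)
print("a, m, s1, s2 =", np.round(popt, 2))        # tailing since s2 > s1
```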

  11. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    PubMed

    Roffler, Gretchen H; Schwartz, Michael K; Pilgrim, Kristy L; Talbot, Sandra L; Sage, George K; Adams, Layne G; Luikart, Gordon

    2016-07-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity. PMID:27330556
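
    The regression core of the multiple-regression-of-distance-matrices step can be sketched as below; the permutation testing normally used for inference with distance matrices is omitted, and all matrices are synthetic:

```python
import numpy as np

def mrm_coefficients(gen_dist, predictors):
    """Regress pairwise genetic distance on landscape distance matrices.

    Uses the upper triangles of symmetric matrices; permutation tests
    (normally required for inference here) are omitted in this sketch.
    """
    iu = np.triu_indices_from(gen_dist, k=1)
    y = gen_dist[iu]
    X = np.column_stack([np.ones(y.size)] + [P[iu] for P in predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

rng = np.random.default_rng(9)
n = 30                                          # individuals
slope = rng.random((n, n)); slope = (slope + slope.T) / 2
cover = rng.random((n, n)); cover = (cover + cover.T) / 2
gen = 0.8 * slope + 0.2 * cover + rng.normal(0, 0.05, (n, n))
gen = (gen + gen.T) / 2
print(np.round(mrm_coefficients(gen, [slope, cover]), 3))
```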

  12. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
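
    The linear prediction being tested can be reconstructed as follows: sum the spot-response map under a bar of each orientation and take that sum as the predicted bar response. The sketch below uses a synthetic receptive-field map and an assumed bar half-width:

```python
import numpy as np

def predict_bar_responses(rf_map, angles_deg, half_width=1.0):
    """Predict bar responses by linearly summing a spot-response map.

    rf_map: 2D array of PSP amplitudes from spot stimuli; the predicted
    response to a bar at angle theta (from horizontal) is the sum of map
    values within half_width of a line through the field centre.
    """
    h, w = rf_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xc, yc = (w - 1) / 2, (h - 1) / 2
    out = []
    for th in np.deg2rad(angles_deg):
        # distance from each pixel to the centred line at angle th
        dist = np.abs(-(xx - xc) * np.sin(th) + (yy - yc) * np.cos(th))
        out.append(rf_map[dist <= half_width].sum())
    return np.array(out)

# toy elongated RF: a vertical stripe, so the preferred bar angle is 90 deg
rf = np.exp(-(np.mgrid[0:21, 0:21][1] - 10) ** 2 / 4.0)
angles = np.arange(0, 180, 15)
pred = predict_bar_responses(rf, angles)
print("preferred angle:", angles[np.argmax(pred)], "deg")
```

    The paper's finding is that this kind of summation gets the preferred orientation right but underestimates how sharply real responses fall off away from it.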

  13. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    USGS Publications Warehouse

    Roffler, Gretchen H.; Schwartz, Michael K.; Pilgrim, Kristy L.; Talbot, Sandra; Sage, Kevin; Adams, Layne G.; Luikart, Gordon

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity.

  14. A Dynamical Model of Hierarchical Selection and Coordination in Speech Planning

    PubMed Central

    Tilsen, Sam

    2013-01-01

    Studies of the control of complex sequential movements have dissociated two aspects of movement planning: control over the sequential selection of movement plans, and control over the precise timing of movement execution. This distinction is particularly relevant in the production of speech: utterances contain sequentially ordered words and syllables, but articulatory movements are often executed in a non-sequential, overlapping manner with precisely coordinated relative timing. This study presents a hybrid dynamical model in which competitive activation controls selection of movement plans and coupled oscillatory systems govern coordination. The model departs from previous approaches by ascribing an important role to competitive selection of articulatory plans within a syllable. Numerical simulations show that the model reproduces a variety of speech production phenomena, such as effects of preparation and utterance composition on reaction time, and asymmetries in patterns of articulatory timing associated with onsets and codas. The model furthermore provides a unified understanding of a diverse group of phonetic and phonological phenomena which have not previously been related. PMID:23638147

  15. Consideration in selecting crops for the human-rated life support system: a Linear Programming model

    NASA Technical Reports Server (NTRS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts, representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates the nutritional content and controlled-environment production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided), but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste- and variety-driven food choices can be built into the model.
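
    The LP formulation can be sketched with SciPy; the nutrient contents and crew requirements below are illustrative placeholders rather than the model's actual data, and the real model includes many more nutrients plus the culinary constraints mentioned above:

```python
import numpy as np
from scipy.optimize import linprog

# toy nutrient yields per m^2 of growing area (rows: crops)
crops = ["wheat", "soybean", "potato", "lettuce"]
energy = np.array([8.0, 5.0, 9.0, 0.5])      # MJ/m^2/day (illustrative)
protein = np.array([25., 60., 15., 3.])      # g/m^2/day (illustrative)

# minimize total growing area subject to daily crew requirements
c = np.ones(len(crops))                      # objective: 1 m^2 per unit area
A_ub = -np.array([energy, protein])          # flip signs: >= becomes <=
b_ub = -np.array([120.0, 400.0])             # energy and protein requirements
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(crops))

for name, area in zip(crops, res.x):
    print(f"{name:>8}: {area:6.2f} m^2")
```

    As the abstract notes, an unconstrained optimum concentrates on a few nutrient-dense crops; variety constraints must be added to get a palatable diet.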

  16. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    PubMed Central

    Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar

    2016-01-01

    Purpose There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results The best model, which used 30 genes and the RBF kernel, outperformed the other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validation in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to the other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
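
    The shape of the pipeline (filter-based gene selection followed by an RBF-kernel SVM with cross-validation) can be sketched as below; univariate mutual information stands in for mRMR, which scikit-learn does not provide, and the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the exon-array data: 90 samples, many genes
X, y = make_classification(n_samples=90, n_features=2000, n_informative=30,
                           random_state=0)

# mutual-information filter (a simpler univariate stand-in for mRMR)
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=30),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

    Placing the feature selector inside the pipeline matters: it is refitted within each cross-validation fold, which avoids the selection-bias leakage that inflates accuracy when genes are chosen on the full dataset first.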

  17. Efficient spiking neural network model of pattern motion selectivity in visual cortex.

    PubMed

    Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L

    2014-07-01

    Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available. PMID:24497233
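    The CDS-to-PDS pooling stage can be caricatured in a few lines of numpy: component cells are cosine-tuned to individual grating directions, and a pattern cell pools them with excitation around its preferred direction and inhibition opposite it. This is a hypothetical rate-based toy, not the spiking GPU implementation.

```python
import numpy as np

directions = np.linspace(0, 2 * np.pi, 24, endpoint=False)  # CDS preferences

def cds_response(grating_dir, preferred):
    """Toy component cell: half-rectified cosine tuning to one grating."""
    return np.maximum(np.cos(grating_dir - preferred), 0.0)

def pds_response(grating_dirs, pattern_dir):
    """Toy pattern cell: pool CDS cells with a cosine weight profile
    (excitation near the pattern direction, inhibition opposite it)."""
    weights = np.cos(directions - pattern_dir)
    pooled = sum(weights @ cds_response(g, directions) for g in grating_dirs)
    return max(pooled, 0.0)

# A plaid built from gratings at 30 and 150 degrees drifts at 90 degrees.
plaid = [np.deg2rad(30), np.deg2rad(150)]
for probe in [0, 30, 90, 150]:
    print(f"pattern cell tuned to {probe:3d} deg: "
          f"{pds_response(plaid, np.deg2rad(probe)):.2f}")
```

    The cell tuned to 90° responds most strongly even though neither component grating moves at 90°, which is the defining signature of pattern direction selectivity.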

  18. Photometry and models of selected main belt asteroids. IX. Introducing interactive service for asteroid models (ISAM)

    NASA Astrophysics Data System (ADS)

    Marciniak, A.; Bartczak, P.; Santana-Ros, T.; Michałowski, T.; Antonini, P.; Behrend, R.; Bembrick, C.; Bernasconi, L.; Borczyk, W.; Colas, F.; Coloma, J.; Crippa, R.; Esseiva, N.; Fagas, M.; Fauvaud, M.; Fauvaud, S.; Ferreira, D. D. M.; Hein Bertelsen, R. P.; Higgins, D.; Hirsch, R.; Kajava, J. J. E.; Kamiński, K.; Kryszczyńska, A.; Kwiatkowski, T.; Manzini, F.; Michałowski, J.; Michałowski, M. J.; Paschke, A.; Polińska, M.; Poncy, R.; Roy, R.; Santacana, G.; Sobkowiak, K.; Stasik, M.; Starczewski, S.; Velichko, F.; Wucher, H.; Zafar, T.

    2012-09-01

    Context. The shapes and spin states of asteroids observed with photometric techniques can be reconstructed using the lightcurve inversion method. The resultant models can then be confirmed or exploited further by other techniques, such as adaptive optics, radar, thermal infrared, stellar occultations, or space probe imaging. Aims: During our ongoing work to increase the set of asteroids with known spin and shape parameters, there appeared a need for displaying the model plane-of-sky orientations for specific epochs to compare models from different techniques. It would also be instructive to be able to track how the complex lightcurves are produced by various asteroid shapes. Methods: Basing our analysis on an extensive photometric observational dataset, we obtained eight asteroid models with the convex lightcurve inversion method. To enable comparison of the photometric models with those from other observing/modelling techniques, we created an on-line service where we allow the inversion models to be orientated interactively. Results: Our sample of objects is quite representative, containing both relatively fast and slow rotators with both high- and low-inclination spin axes. With this work, we increase the sample of asteroid spin and shape models based on disk-integrated photometry to over 200. Three of the shape models obtained here are confirmed by stellar occultation data, which also allowed independent determinations of their sizes. Conclusions: The ISAM service can be widely exploited for past and future asteroid observations with various, complementary techniques and for asteroid dimension determination. http://isam.astro.amu.edu.pl Photometric data are only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/545/A131

  19. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Malik, Wasim Q.; Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID

  20. Modulation depth estimation and variable selection in state-space models for neural interfaces.

    PubMed

    Malik, Wasim Q; Hochberg, Leigh R; Donoghue, John P; Brown, Emery N

    2015-02-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID
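    A toy version of the rank-then-truncate idea: score each channel by how strongly the behavioural covariate modulates it, then let a model order criterion decide how many top-ranked channels the decoder keeps. Everything below is synthetic, and the modulation proxy (variance explained by a linear fit) is far cruder than the paper's state-space estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 2000, 40                          # time steps, recorded channels
x = rng.standard_normal(T)               # 1-D intended kinematics (synthetic)
tuned = rng.random(C) < 0.3              # ~30% of channels are truly tuned
gain = np.where(tuned, rng.uniform(1, 3, C), rng.uniform(0, 0.2, C))
Y = np.outer(x, gain) + rng.standard_normal((T, C))   # channel signals

# Crude modulation-depth proxy: squared correlation with the covariate.
depth = np.array([np.corrcoef(x, Y[:, c])[0, 1] ** 2 for c in range(C)])
order = np.argsort(depth)[::-1]          # most- to least-modulated channels

def decoder_aic(k):
    """Gaussian AIC (up to a constant) of a least-squares decoder on top-k."""
    Z = Y[:, order[:k]]
    beta, *_ = np.linalg.lstsq(Z, x, rcond=None)
    rss = np.sum((x - Z @ beta) ** 2)
    return T * np.log(rss / T) + 2 * k

best_k = min(range(1, C + 1), key=decoder_aic)
print(f"channels kept by the AIC criterion: {best_k} of {C}")
```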

  1. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  2. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  3. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  4. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  5. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES Beta

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.

  6. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    SciTech Connect

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
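    Steps three and four of this process (fit models over combinations of candidate inputs, then compare them with flexible metrics) are easy to prototype. The sketch below enumerates every input subset with statsmodels OLS and ranks the fits by AIC; the input names, data, and degradation pattern are invented for illustration.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
inputs = {"age": rng.uniform(0, 20, n),
          "usage": rng.uniform(0, 1000, n),
          "humidity": rng.uniform(10, 90, n)}
# Synthetic reliability summary that degrades with age and humidity only.
y = 1.0 - 0.02 * inputs["age"] - 0.003 * inputs["humidity"] + rng.normal(0, 0.05, n)

fits = []
for k in range(1, len(inputs) + 1):
    for combo in itertools.combinations(inputs, k):
        X = sm.add_constant(np.column_stack([inputs[v] for v in combo]))
        fits.append((sm.OLS(y, X).fit().aic, combo))

# A prioritized list of contenders for subject matter experts to review.
for aic, combo in sorted(fits)[:3]:
    print(f"AIC {aic:9.1f}   inputs: {', '.join(combo)}")
```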

  7. Multi-scale modelling of ovarian follicular development: From follicular morphogenesis to selection for ovulation.

    PubMed

    Monniaux, Danielle; Michel, Philippe; Postel, Marie; Clément, Frédérique

    2016-06-01

    In this review, we present multi-scale mathematical models of ovarian follicular development that are based on the embedding of physiological mechanisms into the cell scale. During basal follicular development, follicular growth operates through an increase in the oocyte size concomitant with the proliferation of its surrounding granulosa cells. We have developed a spatio-temporal model of follicular morphogenesis explaining how the interactions between the oocyte and granulosa cells need to be properly balanced to shape the follicle. During terminal follicular development, the ovulatory follicle is selected amongst a cohort of simultaneously growing follicles. To address this process of follicle selection, we have developed a model giving a continuous and deterministic description of follicle development, adapted to high numbers of cells and based on the dynamical and hormonally regulated repartition of granulosa cells into different cell states, namely proliferation, differentiation and apoptosis. This model takes into account the hormonal feedback loop involving the growing ovarian follicles and the pituitary gland, and enables the exploration of mechanisms regulating the number of ovulations at each ovarian cycle. Both models are useful for addressing ovarian physio-pathological situations. Moreover, they can be proposed as generic modelling environments to study various developmental processes and cell interaction mechanisms. PMID:26856895

  8. Selection of informative metabolites using random forests based on model population analysis.

    PubMed

    Huang, Jian-Hua; Yan, Jun; Wu, Qing-Hua; Duarte Ferro, Miguel; Yi, Lun-Zhao; Lu, Hong-Mei; Xu, Qing-Song; Liang, Yi-Zeng

    2013-12-15

    One of the main goals of metabolomics studies is to discover informative metabolites or biomarkers, which may be used to diagnose diseases and to investigate disease pathology. Sophisticated feature selection approaches are required to extract the information hidden in such complex 'omics' data. In this study, a new and robust selection method combining random forests (RF) with model population analysis (MPA) is proposed for selecting informative metabolites from three metabolomic datasets. According to their contribution to the classification accuracy, the metabolites were classified into three kinds: informative, non-informative, and interfering metabolites. Based on the proposed method, informative metabolites were selected for the three datasets; further analyses of these metabolites between healthy and diseased groups were then performed, and t-tests showed that the P values for all selected metabolites were below 0.05. Moreover, the informative metabolites identified by the current method were demonstrated to be correlated with the clinical outcome under investigation. The source codes of MPA-RF in Matlab can be freely downloaded from http://code.google.com/p/my-research-list/downloads/list. PMID:24209380
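    The MPA flavour of the method can be mimicked by fitting many random forests on random subsamples and examining the distribution of each variable's importance, rather than trusting a single fit. The sketch below uses synthetic data and scikit-learn; it illustrates the idea and is not the authors' Matlab MPA-RF code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 50 "metabolites", of which the first 5 carry the class information.
X, y = make_classification(n_samples=120, n_features=50, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

n_models, frac = 200, 0.7
imp = np.zeros((n_models, X.shape[1]))
for m in range(n_models):
    idx = rng.choice(len(y), size=int(frac * len(y)), replace=False)
    rf = RandomForestClassifier(n_estimators=100, random_state=m)
    rf.fit(X[idx], y[idx])
    imp[m] = rf.feature_importances_

# Metabolites whose importance stays high across the model population.
top = np.argsort(np.median(imp, axis=0))[::-1][:5]
print("candidate informative metabolites (feature indices):", sorted(top.tolist()))
```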

  9. Noise assisted excitation energy transfer in a linear model of a selectivity filter backbone strand.

    PubMed

    Bassereh, Hassan; Salari, Vahid; Shahbazi, Farhad

    2015-07-15

    In this paper, we investigate the effect of noise and disorder on the efficiency of excitation energy transfer (EET) in an N = 5 site linear chain with 'static' dipole-dipole couplings. Here, the disordered chain is a toy model for one strand of the selectivity filter backbone in ion channels. It has recently been discussed that the presence of quantum coherence in the selectivity filter is possible and can play a role in mediating ion conduction and ion selectivity. The question is how quantum coherence can be effective in such structures while the environment of the channel is dephasing (i.e. noisy). Basically, we would expect the presence of noise to have a destructive effect on quantum transport, and we show that this expectation is valid for ordered chains. However, our results indicate that introducing dephasing in disordered chains weakens the localization effects arising from multiple back-scattering due to randomness, and thereby increases the efficiency of quantum energy transfer. Thus, the presence of noise is crucial for the enhancement of EET efficiency in disordered chains. We also show that the contributions of both classical and quantum mechanical effects are required to improve the speed of energy transfer along the chain. Our analysis may help toward a better understanding of the fast and efficient functioning of the selectivity filters in ion channels. PMID:26061758
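    The dephasing-assisted transport effect can be reproduced with a minimal density-matrix simulation: under pure site dephasing the Lindblad dissipator reduces to gamma * (diag(rho) - rho), so forward-Euler integration of a five-site disordered chain takes only a few lines. All parameter values below are arbitrary illustrative choices, not those used in the paper.

```python
import numpy as np

def mean_end_population(gamma, epsilon, J=1.0, t_max=20.0, dt=0.001):
    """Integrate drho/dt = -i[H, rho] + gamma * (diag(rho) - rho) for a
    nearest-neighbour chain; return the time-averaged end-site population."""
    N = len(epsilon)
    H = np.diag(epsilon).astype(complex)
    for n in range(N - 1):
        H[n, n + 1] = H[n + 1, n] = J          # static dipole-dipole couplings
    rho = np.zeros((N, N), complex)
    rho[0, 0] = 1.0                            # excitation starts at site 1
    steps = int(t_max / dt)
    pop = 0.0
    for _ in range(steps):
        deph = gamma * (np.diag(np.diag(rho)) - rho)
        rho = rho + dt * (-1j * (H @ rho - rho @ H) + deph)
        pop += rho[-1, -1].real
    return pop / steps

rng = np.random.default_rng(3)
epsilon = rng.uniform(-2.5, 2.5, 5)            # static disorder (site energies)
for gamma in [0.0, 0.5, 5.0]:
    print(f"gamma = {gamma:>3}: mean end-site population "
          f"{mean_end_population(gamma, epsilon):.3f}")
```

    Typically the disordered chain with gamma = 0 keeps the excitation localized near site 1, moderate dephasing washes out the destructive interference and lets population reach the far end, and very strong dephasing suppresses transport again.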

  10. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis

    PubMed Central

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on cloud-based platforms, so how to select trustworthy cloud services remains a significant problem in such platforms and has been extensively investigated owing to the ever-growing needs of users. However, trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured by Jaccard’s Coefficient and the numerical distance computed by the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results via adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments. PMID:26606388

  11. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    PubMed

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on cloud-based platforms, so how to select trustworthy cloud services remains a significant problem in such platforms and has been extensively investigated owing to the ever-growing needs of users. However, trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured by Jaccard's Coefficient and the numerical distance computed by the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results via adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments. PMID:26606388
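    The prediction step (weight each neighbour's observed QoS by a similarity that has been modulated by trust) reduces to a few lines of numpy. The QoS matrix, trust degrees, and the multiplicative weighting below are invented for illustration; the paper's direct/indirect/hybrid trust computation is more involved.

```python
import numpy as np

# Rows: users, columns: cloud services; np.nan marks an unobserved QoS value.
qos = np.array([[0.9, 0.8, np.nan],
                [0.8, 0.7, 0.6],
                [0.2, 0.3, 0.9]])
trust = np.array([1.0, 0.9, 0.1])     # user 0's trust degree in each user

def pearson(u, v):
    mask = ~np.isnan(u) & ~np.isnan(v)
    return float(np.corrcoef(u[mask], v[mask])[0, 1]) if mask.sum() >= 2 else 0.0

user, service = 0, 2
sims = np.array([pearson(qos[user], qos[v]) for v in range(len(qos))])
enhanced = np.clip(sims * trust, 0, None)   # trust-enhanced similarity
enhanced[user] = 0.0                        # a user is not their own neighbour

observed = ~np.isnan(qos[:, service])
w = enhanced * observed
pred = np.nansum(w * np.nan_to_num(qos[:, service])) / w.sum()
print(f"predicted QoS of service {service} for user {user}: {pred:.2f}")
```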

  12. LiCABEDS II. Modeling of ligand selectivity for G-protein-coupled cannabinoid receptors.

    PubMed

    Ma, Chao; Wang, Lirong; Yang, Peng; Myint, Kyaw Z; Xie, Xiang-Qun

    2013-01-28

    The cannabinoid receptor subtype 2 (CB2) is a promising therapeutic target for blood cancer, pain relief, osteoporosis, and immune system disease. The recent withdrawal of Rimonabant, which targets another closely related cannabinoid receptor (CB1), accentuates the importance of selectivity for the development of CB2 ligands in order to minimize their effects on the CB1 receptor. In our previous study, LiCABEDS (Ligand Classifier of Adaptively Boosting Ensemble Decision Stumps) was reported as a generic ligand classification algorithm for the prediction of categorical molecular properties. Here, we report extension of the application of LiCABEDS to the modeling of cannabinoid ligand selectivity with molecular fingerprints as descriptors. The performance of LiCABEDS was systematically compared with another popular classification algorithm, support vector machine (SVM), according to prediction precision and recall rate. In addition, the examination of LiCABEDS models revealed the difference in structure diversity of CB1 and CB2 selective ligands. The structure determination from data mining could be useful for the design of novel cannabinoid lead compounds. More importantly, the potential of LiCABEDS was demonstrated through successful identification of newly synthesized CB2 selective compounds. PMID:23278450
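    LiCABEDS boosts decision stumps over molecular fingerprint bits. A generic analogue is scikit-learn's AdaBoostClassifier, whose default weak learner is exactly a depth-1 decision tree; the binary fingerprints and selectivity labels below are synthetic stand-ins for a real CB1/CB2 ligand set.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_ligands, n_bits = 400, 256
X = rng.integers(0, 2, size=(n_ligands, n_bits))   # toy binary fingerprints
# Toy rule: a ligand is "CB2-selective" iff key substructure bits co-occur.
y = ((X[:, 10] & X[:, 42]) | X[:, 7]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Adaptively boosted ensemble of one-split decision stumps, in the spirit
# of LiCABEDS (the default AdaBoost base learner is a max_depth=1 tree).
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```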

  13. Experiment and modeling of exit-selecting behaviors during a building evacuation

    NASA Astrophysics Data System (ADS)

    Fang, Zhiming; Song, Weiguo; Zhang, Jun; Wu, Hao

    2010-02-01

    The evacuation process in a teaching building with two neighboring exits is investigated by means of experiment and modeling. The basic parameters such as flow, density and velocity of pedestrians in the exit area are measured. The exit-selecting phenomenon in the experiment is analyzed, and it is found that pedestrians prefer the closer exit even when the other exit is only a little farther away. In order to understand this phenomenon, we reproduce the experimental process with a modified biased random walk model, in which the preference for the closer exit is captured by a drift direction and a drift force. Our simulations afford a calibrated value of the drift force: in particular, when it is 0.56, there is good agreement between the simulation and the experiment on the number of pedestrians selecting the closer exit, the average velocity through the exits, the cumulative distribution of the instantaneous velocity, and the fundamental diagram of the flow through the exits. Further simulation results show that pedestrians tend to select the exit closer to them, especially when the crowd density is small or medium; if the density is large enough, however, the flow rates of the two exits become comparable because of detour behaviors. This reflects the fact that a crowd may not act rationally to optimize the usage of multiple exits, especially in an emergency.
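    A minimal version of the biased random walk is easy to write down: at each step a walker either drifts one cell toward its nearer exit, with probability given by the drift force, or wanders randomly. Grid size, exit positions, and the start distribution below are hypothetical, and no pedestrian-pedestrian interaction is modelled.

```python
import numpy as np

rng = np.random.default_rng(0)
exits = np.array([[0, 10], [0, 18]])     # two neighbouring exits on one wall

def evacuate(start, drift, max_steps=500):
    """Biased random walk: drift toward the nearer exit with prob. `drift`,
    otherwise take a uniformly random step. Returns the exit used, or -1."""
    pos = np.array(start, dtype=float)
    for _ in range(max_steps):
        idx = int(np.argmin(np.linalg.norm(exits - pos, axis=1)))
        if rng.random() < drift:
            pos += np.sign(exits[idx] - pos)        # drift direction
        else:
            pos += rng.integers(-1, 2, size=2)      # random wandering
        if (pos == exits[idx]).all():
            return idx
    return -1

starts = [(rng.integers(1, 30), rng.integers(0, 29)) for _ in range(300)]
for drift in [0.2, 0.56, 0.9]:
    used = [evacuate(s, drift) for s in starts]
    print(f"drift force {drift:.2f}: exit A {used.count(0):3d}, "
          f"exit B {used.count(1):3d}, not out {used.count(-1):3d}")
```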

  14. Influence of model selection on the predicted distribution of the seagrass Zostera marina

    NASA Astrophysics Data System (ADS)

    Downie, Anna-Leena; von Numers, Mikael; Boström, Christoffer

    2013-04-01

    There is an increasing need to model the distribution of species and habitats for effective conservation planning, but there is a paucity of models for the marine environment. We used presence (131) and absence (219) records of the marine angiosperm Zostera marina L. from the archipelago of SW Finland, northern Baltic Sea, to model its distribution in a 5400 km2 area. We used depth, slope, turbidity, wave exposure and distance to sandy shores as environmental predictors, and compared a presence-absence method, the generalised additive model (GAM), with a presence-only method, maximum entropy (Maxent). Models were validated using semi-independent data sets. Both models performed well and described the niche of Z. marina fairly consistently, although there were differences in the way the models weighted the environmental variables, and consequently the spatial predictions differed somewhat. A notable outcome was that, with relatively equal model performance, the area actually predicted in geographical space can vary twofold: the area predicted as suitable for Z. marina by the ensemble of the two models was almost half of that predicted by the GAM model by itself. The ensemble of model predictions increased the predictive capability marginally and clearly shifted the model towards a more conservative prediction, increasing specificity but at the same time sacrificing sensitivity. The environmental predictors selected into the final models described the potential distribution of Z. marina well and showed that in the northern Baltic the species occupies a narrow niche, typically thriving in shallow and moderately exposed to exposed locations near sandy shores. We conclude that a prediction based on a combination of model results provides a more realistic estimate of the core area suitable for Z. marina and should be the modelling approach implemented in conservation planning and management.

  15. Geographic selection bias of occurrence data influences transferability of invasive Hydrilla verticillata distribution models.

    PubMed

    Barnes, Matthew A; Jerde, Christopher L; Wittmann, Marion E; Chadderton, W Lindsay; Ding, Jianqing; Zhang, Jialiang; Purcell, Matthew; Budhathoki, Milan; Lodge, David M

    2014-06-01

    Due to socioeconomic differences, the accuracy and extent of reporting on the occurrence of native species differs among countries, which can impact the performance of species distribution models. We assessed the importance of geographical biases in occurrence data on model performance using Hydrilla verticillata as a case study. We used Maxent to predict potential North American distribution of the aquatic invasive macrophyte based upon training data from its native range. We produced a model using all available native range occurrence data, then explored the change in model performance produced by omitting subsets of training data based on political boundaries. We also compared those results with models trained on data from which a random sample of occurrence data was omitted from across the native range. Although most models accurately predicted the occurrence of H. verticillata in North America (AUC > 0.7600), data omissions influenced model predictions. Omitting data based on political boundaries resulted in larger shifts in model accuracy than omitting randomly selected occurrence data. For well-documented species like H. verticillata, missing records from single countries or ecoregions may minimally influence model predictions, but for species with fewer documented occurrences or poorly understood ranges, geographic biases could misguide predictions. Regardless of focal species, we recommend that future species distribution modeling efforts begin with a reflection on potential spatial biases of available occurrence data. Improved biodiversity surveillance and reporting will provide benefit not only in invaded ranges but also within under-reported and unexplored native ranges. PMID:25360288

  16. Geographic selection bias of occurrence data influences transferability of invasive Hydrilla verticillata distribution models

    PubMed Central

    Barnes, Matthew A; Jerde, Christopher L; Wittmann, Marion E; Chadderton, W Lindsay; Ding, Jianqing; Zhang, Jialiang; Purcell, Matthew; Budhathoki, Milan; Lodge, David M

    2014-01-01

    Due to socioeconomic differences, the accuracy and extent of reporting on the occurrence of native species differs among countries, which can impact the performance of species distribution models. We assessed the importance of geographical biases in occurrence data on model performance using Hydrilla verticillata as a case study. We used Maxent to predict potential North American distribution of the aquatic invasive macrophyte based upon training data from its native range. We produced a model using all available native range occurrence data, then explored the change in model performance produced by omitting subsets of training data based on political boundaries. We also compared those results with models trained on data from which a random sample of occurrence data was omitted from across the native range. Although most models accurately predicted the occurrence of H. verticillata in North America (AUC > 0.7600), data omissions influenced model predictions. Omitting data based on political boundaries resulted in larger shifts in model accuracy than omitting randomly selected occurrence data. For well-documented species like H. verticillata, missing records from single countries or ecoregions may minimally influence model predictions, but for species with fewer documented occurrences or poorly understood ranges, geographic biases could misguide predictions. Regardless of focal species, we recommend that future species distribution modeling efforts begin with a reflection on potential spatial biases of available occurrence data. Improved biodiversity surveillance and reporting will provide benefit not only in invaded ranges but also within under-reported and unexplored native ranges. PMID:25360288
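    The core manipulation (retrain after omitting occurrence records either by political unit or at random, then score transferability elsewhere) is simple to prototype. In the sketch below a logistic regression stands in for Maxent, all data are synthetic, and "country 1" is constructed to occupy a distinct slice of climate space so that omitting it biases the training sample rather than merely shrinking it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Native range: two climate covariates drive occurrence; the country label
# is correlated with climate, so omitting a country truncates covariate space.
X = rng.normal(size=(500, 2))
y = (X @ np.array([2.0, -1.0]) + rng.normal(0, 1, 500) > 0).astype(int)
country1 = X[:, 0] > 0.5

# Invaded-range stand-in, scored against the noiseless suitability rule.
X_new = rng.normal(size=(300, 2))
y_new = (X_new @ np.array([2.0, -1.0]) > 0).astype(int)

def transfer_auc(keep):
    model = LogisticRegression().fit(X[keep], y[keep])   # Maxent stand-in
    return roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])

random_keep = rng.random(500) > country1.mean()  # drop the same fraction at random
print(f"all records:        AUC {transfer_auc(np.ones(500, bool)):.3f}")
print(f"omit country 1:     AUC {transfer_auc(~country1):.3f}")
print(f"omit random subset: AUC {transfer_auc(random_keep):.3f}")
```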

  17. Alive SMC(2): Bayesian model selection for low-count time series models with intractable likelihoods.

    PubMed

    Drovandi, Christopher C; McCutchan, Roy A

    2016-06-01

    In this article we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method involves incorporating an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel exact-approximate algorithm, which we refer to as alive SMC2. The advantages of this approach over competing methods are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series, and the cumulative number of prion disease cases in mule deer. PMID:26584211

  18. SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH

    EPA Science Inventory

    While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...

  19. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
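    For reference, information-criterion averaging weights follow the standard form w_i ∝ exp(-Δ_i / 2), where Δ_i is model i's criterion value minus the minimum across models. The snippet below, with made-up AIC values, shows how even modest Δ values concentrate nearly all weight on the best model, which is the pathology this study addresses.

```python
import numpy as np

def ic_weights(ic_values):
    """Model-averaging weights w_i ∝ exp(-Δ_i / 2) from AIC/AICc/BIC/KIC values."""
    delta = np.asarray(ic_values, float) - np.min(ic_values)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three alternative conceptual models whose criteria differ by < 25 units:
print(ic_weights([1012.4, 1020.1, 1035.8]).round(4))   # ~[0.979, 0.021, 0.000]
```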

  20. Selection of resistant Streptococcus pneumoniae during penicillin treatment in vitro and in three animal models.

    PubMed

    Knudsen, Jenny Dahl; Odenholt, Inga; Erlendsdottir, Helga; Gottfredsson, Magnus; Cars, Otto; Frimodt-Møller, Niels; Espersen, Frank; Kristinsson, Karl G; Gudmundsson, Sigurdur

    2003-08-01

    Pharmacokinetic (PK) and pharmacodynamic (PD) properties for the selection of resistant pneumococci were studied by using three strains of the same serotype (6B) for mixed-culture infection in time-kill experiments in vitro and in three different animal models, the mouse peritonitis, the mouse thigh, and the rabbit tissue cage models. Treatment regimens with penicillin were designed to give a wide range of T(>MIC)s, the amounts of time for which the drug concentrations in serum were above the MIC. The mixed culture of the three pneumococcal strains, 10^7 CFU of strain A (MIC of penicillin, 0.016 µg/ml; erythromycin resistant)/ml, 10^6 CFU of strain B (MIC of penicillin, 0.25 µg/ml)/ml, and 10^5 CFU of strain C (MIC of penicillin, 4 µg/ml)/ml, was used in the two mouse models, and a mixture of 10^5 CFU of strain A/ml, 10^4 CFU of strain B/ml, and 10^3 CFU of strain C/ml was used in the rabbit tissue cage model. During the different treatment regimens, the differences in numbers of CFU between treated and control animals were calculated to measure the efficacies of the regimens. Selective media with erythromycin or different penicillin concentrations were used to quantify the strains separately. The efficacies of penicillin in vitro were similar when individual strains or mixed cultures were studied. The eradication of the bacteria, independent of the susceptibility of the strain or strains or the presence of the strains in a mixture or on their own, followed the well-known PK and PD rules for treatment with beta-lactams: a maximum efficacy was seen when the T(>MIC) was >40 to 50% of the observation time and the ratio of the maximum concentration of the drug in serum to the MIC was >10. It was possible in all three models to select for the less-susceptible strains by using insufficient treatments. In the rabbit tissue cage model, a regrowth of pneumococci was observed; in the mouse thigh model, the ratio between the different strains changed in
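    T(>MIC) is straightforward to compute for a one-compartment PK model with exponential decay: solve C(t) = Cmax * exp(-k t) = MIC for t and divide by the dosing interval. The Cmax, half-life, and interval below are invented for illustration, but the pattern matches the abstract's rule of thumb: the same regimen stays above the susceptible strains' MICs far longer than above the resistant strain's.

```python
import numpy as np

def t_above_mic(cmax, half_life_h, mic, tau_h):
    """Fraction of the dosing interval tau_h during which a one-compartment
    bolus profile C(t) = cmax * exp(-k t) stays above the MIC."""
    if cmax <= mic:
        return 0.0
    k = np.log(2) / half_life_h
    t = np.log(cmax / mic) / k          # time for C(t) to decay to the MIC
    return min(t, tau_h) / tau_h

# Hypothetical penicillin regimen: Cmax 10 ug/ml, half-life 0.5 h, q6h dosing.
for mic in [0.016, 0.25, 4.0]:          # the three study strains' MICs
    pct = 100 * t_above_mic(cmax=10.0, half_life_h=0.5, mic=mic, tau_h=6.0)
    print(f"MIC {mic:>6} ug/ml: T(>MIC) = {pct:5.1f}% of the dosing interval")
```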