Science.gov

Sample records for aic model selection

  1. Investigating the performance of AIC in selecting phylogenetic models.

    PubMed

    Jhwueng, Dwueng-Chwuan; Huzurbazar, Snehalata; O'Meara, Brian C; Liu, Liang

    2014-08-01

    The popular likelihood-based model selection criterion, Akaike's Information Criterion (AIC), is a breakthrough mathematical result derived from information theory. AIC is an approximation to Kullback-Leibler (KL) divergence with the derivation relying on the assumption that the likelihood function has finite second derivatives. However, for phylogenetic estimation, given that tree space is discrete with respect to tree topology, the assumption of a continuous likelihood function with finite second derivatives is violated. In this paper, we investigate the relationship between the expected log likelihood of a candidate model and the expected KL divergence in the context of phylogenetic tree estimation. We find that given the tree topology, AIC is an unbiased estimator of the expected KL divergence. However, when the tree topology is unknown, AIC tends to underestimate the expected KL divergence for phylogenetic models. Simulation results suggest that the degree of underestimation varies across phylogenetic models so that even for large sample sizes, the bias of AIC can result in selecting a wrong model. As the choice of phylogenetic models is essential for statistical phylogenetic inference, it is important to improve the accuracy of model selection criteria in the context of phylogenetics. PMID:24867284
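
    For reference, a sketch of the two quantities involved, in standard notation rather than the paper's own: for a model with k free parameters and maximized likelihood L(θ̂) under candidate density g, relative to the true density f,

        \mathrm{AIC} = -2\ln L(\hat\theta) + 2k,
        \qquad
        \mathrm{KL}(f \,\|\, g_{\hat\theta}) = \mathrm{E}_f[\ln f(y)] - \mathrm{E}_f[\ln g(y \mid \hat\theta)].

    Because E_f[ln f(y)] is the same for every candidate, ranking models by AIC approximates ranking them by expected KL divergence; the paper's point is that this approximation leans on second-derivative regularity that a discrete tree-topology space violates.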

  2. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    SciTech Connect

    Glosup, J.G.; Axelrod, M.C.

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
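
    A minimal sketch of the general technique, assuming scikit-learn (this illustrates EM-fitted mixtures compared via AIC generically; it is not Middleton's Class A model, and all values here are invented for the example):

        # Discriminate a single Gaussian from a two-component Gaussian
        # mixture using EM fits and AIC, in the spirit of the record above.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Simulated data actually drawn from a two-component mixture.
        x = np.concatenate([rng.normal(0.0, 1.0, 700),
                            rng.normal(4.0, 0.5, 300)]).reshape(-1, 1)

        for k in (1, 2):
            gm = GaussianMixture(n_components=k, random_state=0).fit(x)  # EM fit
            print(k, "component(s): AIC =", round(gm.aic(x), 1))
        # The two-component model should attain the lower (better) AIC here.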

  3. Model Selection and Akaike's Information Criterion (AIC): The General Theory and Its Analytical Extensions.

    ERIC Educational Resources Information Center

    Bozdogan, Hamparsum

    1987-01-01

    This paper studies the general theory of Akaike's Information Criterion (AIC) and provides two analytical extensions. The extensions make AIC asymptotically consistent and penalize overparameterization more stringently so as to pick only the simpler of two competing models. The criteria are applied in two Monte Carlo experiments. (Author/GDC)
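
    For reference, Bozdogan's consistent variant (CAIC) replaces AIC's fixed penalty of 2k with a sample-size-dependent one:

        \mathrm{AIC} = -2\ln L + 2k,
        \qquad
        \mathrm{CAIC} = -2\ln L + k(\ln n + 1).

    Since ln n + 1 > 2 for any n > e, the CAIC penalty exceeds AIC's and grows with sample size, which is what yields the asymptotic consistency and the stricter treatment of overparameterization noted above.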

  4. Factor Analysis and AIC.

    ERIC Educational Resources Information Center

    Akaike, Hirotugu

    1987-01-01

    The Akaike Information Criterion (AIC) was introduced to extend the method of maximum likelihood to the multimodel situation. Use of the AIC in factor analysis is interesting when it is viewed as the choice of a Bayesian model; thus, wider applications of AIC are possible. (Author/GDC)

  5. Truth, models, model sets, AIC, and multimodel inference: a Bayesian perspective

    USGS Publications Warehouse

    Barker, Richard J.; Link, William A.

    2015-01-01

    Statistical inference begins with viewing data as realizations of stochastic processes. Mathematical models provide partial descriptions of these processes; inference is the process of using the data to obtain a more complete description of the stochastic processes. Wildlife and ecological scientists have become increasingly concerned with the conditional nature of model-based inference: what if the model is wrong? Over the last 2 decades, Akaike's Information Criterion (AIC) has been widely and increasingly used in wildlife statistics for 2 related purposes, first for model choice and second to quantify model uncertainty. We argue that for the second of these purposes, the Bayesian paradigm provides the natural framework for describing uncertainty associated with model choice and provides the most easily communicated basis for model weighting. Moreover, Bayesian arguments provide the sole justification for interpreting model weights (including AIC weights) as coherent (mathematically self-consistent) model probabilities. This interpretation requires treating the model as an exact description of the data-generating mechanism. We discuss the implications of this assumption, and conclude that more emphasis is needed on model checking to provide confidence in the quality of inference.
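
    For reference, the AIC model weights at issue are conventionally computed as

        \Delta_i = \mathrm{AIC}_i - \min_j \mathrm{AIC}_j,
        \qquad
        w_i = \frac{\exp(-\Delta_i / 2)}{\sum_j \exp(-\Delta_j / 2)},

    and the paper's argument is that reading the w_i as coherent model probabilities is justified only within the Bayesian paradigm, under the implicit priors that AIC weighting entails.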

  6. AIC649 Induces a Bi-Phasic Treatment Response in the Woodchuck Model of Chronic Hepatitis B.

    PubMed

    Paulsen, Daniela; Weber, Olaf; Ruebsamen-Schaeff, Helga; Tennant, Bud C; Menne, Stephan

    2015-01-01

    AIC649 has been shown to directly address the antigen presenting cell arm of the host immune defense, leading to a regulated cytokine release and activation of T cell responses. In the present study we analyzed the antiviral efficacy of AIC649 as well as its potential to induce functional cure in animal models for chronic hepatitis B. Hepatitis B virus transgenic mice and woodchucks chronically infected with woodchuck hepatitis virus (WHV) were treated with AIC649. In the mouse system AIC649 decreased the hepatitis B virus titer as effectively as the "gold standard", Tenofovir. Interestingly, AIC649-treated chronically WHV infected woodchucks displayed a bi-phasic pattern of response: The marker for functional cure--hepatitis surface antigen--first increased but subsequently decreased, even after cessation of treatment, to significantly reduced levels. We hypothesize that the observed bi-phasic response pattern to AIC649 treatment reflects a physiologically "concerted", reconstituted immune response against WHV and therefore may indicate a potential for inducing functional cure in HBV-infected patients. PMID:26656974

  7. Perceived challenges and attitudes to regimen and product selection from Italian haemophilia treaters: the 2013 AICE survey.

    PubMed

    Franchini, M; Coppola, A; Rocino, A; Zanon, E; Morfini, M; Accorsi, Arianna; Aru, Anna Brigida; Biasoli, Chiara; Cantori, Isabella; Castaman, Giancarlo; Cesaro, Simone; Ciabatta, Carlo; De Cristofaro, Raimondo; Delios, Grazia; Di Minno, Giovanni; D'Incà, Marco; Dragani, Alfredo; Ettorre, Cosimo Pietro; Gagliano, Fabio; Gamba, Gabriella; Gandini, Giorgio; Giordano, Paola; Giuffrida, Gaetano; Gresele, Paolo; Latella, Caterina; Luciani, Matteo; Margaglione, Maurizio; Marietta, Marco; Mazzucconi, Maria Gabriella; Messina, Maria; Molinari, Angelo Claudio; Notarangelo, Lucia Dora; Oliovecchio, Emily; Peyvandi, Flora; Piseddu, Gavino; Rossetti, Gina; Rossi, Vincenza; Santagostino, Elena; Schiavoni, Mario; Schinco, Piercarla; Serino, Maria Luisa; Tagliaferri, Annarita; Testa, Sophie

    2014-03-01

    Despite great advances in haemophilia care in the last 20 years, a number of questions on haemophilia therapy remain unanswered. These debated issues primarily involve the choice of the product type (plasma-derived vs. recombinant) for patients with different characteristics: specifically, whether they were infected by blood-borne virus infections, and whether they bear a high or low risk of inhibitor development. In addition, the choice of the most appropriate treatment regimen in non-inhibitor and inhibitor patients compels physicians operating at the haemophilia treatment centres (HTCs) to take important therapeutic decisions, which are often based on their personal clinical experience rather than on evidence-based recommendations from published literature data. To learn the opinions of Italian expert physicians, who are responsible for common clinical practice and therapeutic decisions, on the most controversial aspects of haemophilia care, we conducted a survey among the Directors of HTCs affiliated to the Italian Association of Haemophilia Centres (AICE). A questionnaire, consisting of 19 questions covering the most important topics related to haemophilia treatment, was sent to the Directors of all 52 Italian HTCs. Forty Directors out of 52 (76.9%) responded, accounting for the large majority of HTCs affiliated to the AICE throughout Italy. The results of this survey provide for the first time a picture of the attitudes towards clotting factor concentrate use and product selection of clinicians working at Italian HTCs.

  8. The role of multicollinearity in landslide susceptibility assessment by means of Binary Logistic Regression: comparison between VIF and AIC stepwise selection

    NASA Astrophysics Data System (ADS)

    Cama, Mariaelena; Cristi Nicu, Ionut; Conoscenti, Christian; Quénéhervé, Geraldine; Maerker, Michael

    2016-04-01

    Landslide susceptibility can be defined as the likelihood of a landslide occurring in a given area on the basis of local terrain conditions. In recent decades, much research has focused on its evaluation by means of stochastic approaches, under the assumption that 'the past is the key to the future', which means that if a model is able to reproduce a known landslide spatial distribution, it will be able to predict the future locations of new (i.e. unknown) slope failures. Among the various stochastic approaches, Binary Logistic Regression (BLR) is one of the most used because it calculates the susceptibility in probabilistic terms and its results are easily interpretable from a geomorphological point of view. However, multicollinearity assessment is often neglected, even though its effect is that coefficient estimates become unstable, with opposite signs, and are therefore difficult to interpret. It should therefore be evaluated routinely in order to obtain a model whose results are geomorphologically sound. In this study the effects of multicollinearity on the predictive performance and robustness of landslide susceptibility models are analyzed. In particular, multicollinearity is estimated by means of the Variance Inflation Factor (VIF), which is also used as a selection criterion for the independent variables (VIF Stepwise Selection) and compared to the more commonly used AIC Stepwise Selection. The robustness of the results is evaluated through 100 replicates of the dataset. The study area selected to perform this analysis is the Moldavian Plateau, where landslides are among the most frequent geomorphological processes. This area has an increasing trend of urbanization and a very high potential regarding the cultural heritage, being the place of discovery of the largest settlement belonging to the Cucuteni Culture from Eastern Europe (that led to the development of the great Cucuteni-Trypillia complex). Therefore, identifying the areas susceptible to
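
    For reference, the collinearity diagnostic named here is standard: with R_j^2 the coefficient of determination from regressing predictor x_j on all the other predictors,

        \mathrm{VIF}_j = \frac{1}{1 - R_j^2},

    and values above roughly 5 to 10 are commonly taken to signal problematic multicollinearity.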

  9. Model Selection for Geostatistical Models

    SciTech Connect

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
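
    A sketch of the form such a criterion plausibly takes (a generic Gaussian-likelihood version, not necessarily the paper's exact expression): for y ~ N(Xβ, Σ(θ)) with p regression coefficients and q covariance parameters,

        -2\ln L(\hat\beta, \hat\theta) = n\ln(2\pi) + \ln\lvert \Sigma(\hat\theta) \rvert + (y - X\hat\beta)^{\top} \Sigma(\hat\theta)^{-1} (y - X\hat\beta),
        \qquad
        \mathrm{AIC} = -2\ln L(\hat\beta, \hat\theta) + 2(p + q),

    so that, in contrast to the common practice criticized above, the spatial covariance parameters both enter the likelihood and count toward the penalty.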

  10. An extended cure model and model selection.

    PubMed

    Peng, Yingwei; Xu, Jianfeng

    2012-04-01

    We propose a novel interpretation for a recently proposed Box-Cox transformation cure model, which leads to a natural extension of the cure model. Based on the extended model, we consider an important issue of model selection between the mixture cure model and the bounded cumulative hazard cure model via the likelihood ratio test, score test and Akaike's Information Criterion (AIC). Our empirical study shows that AIC is informative and both the score test and the likelihood ratio test have adequate power to differentiate between the mixture cure model and the bounded cumulative hazard cure model when the sample size is large. We apply the tests and AIC methods to leukemia and colon cancer data to examine the appropriateness of the cure models considered for them in the literature.

  11. Model Selection Information Criteria for Non-Nested Latent Class Models.

    ERIC Educational Resources Information Center

    Lin, Ting Hsiang; Dayton, C. Mitchell

    1997-01-01

    The use of three model selection information criteria for latent class models was studied for nonnested models: (1) Akaike's information criterion (H. Akaike, 1973) (AIC); (2) the Schwarz information criterion (G. Schwarz, 1978) (SIC); and (3) the Bozdogan version of the AIC (CAIC) (H. Bozdogan, 1987). Situations in which each is preferable…

  12. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
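
    For reference, the five criteria differ essentially in the penalty added to -2 ln L for a model with k parameters fit to n observations (standard forms; the study may use minor variants):

        \mathrm{AIC}: 2k, \qquad
        \mathrm{AICC}: 2k + \frac{2k(k+1)}{n - k - 1}, \qquad
        \mathrm{CAIC}: k(\ln n + 1), \qquad
        \mathrm{HQIC}: 2k\ln(\ln n), \qquad
        \mathrm{BIC}: k\ln n.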

  13. Information-theoretic model selection and model averaging for closed-population capture-recapture studies

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1998-01-01

    Specification of an appropriate model is critical to valid statistical inference. Given that the "true model" for the data is unknown, the goal of model selection is to select a plausible approximating model that balances model bias and sampling variance. Model selection based on information criteria such as AIC or its variant AICc, or criteria like CAIC, has proven useful in a variety of contexts including the analysis of open-population capture-recapture data. These criteria have not been intensively evaluated for closed-population capture-recapture models, which are integer parameter models used to estimate population size (N), and there is concern that they will not perform well. To address this concern, we evaluated AIC, AICc, and CAIC model selection for closed-population capture-recapture models by empirically assessing the quality of inference for the population size parameter N. We found that AIC-, AICc-, and CAIC-selected models had smaller relative mean squared errors than randomly selected models, but that confidence interval coverage on N was poor unless unconditional variance estimates (which incorporate model uncertainty) were used to compute confidence intervals. Overall, AIC and AICc outperformed CAIC, and are preferred to CAIC for selection among the closed-population capture-recapture models we investigated. A model averaging approach to estimation, using AIC, AICc, or CAIC to estimate weights, was also investigated and proved superior to estimation using AIC-, AICc-, or CAIC-selected models. Our results suggested that, for model averaging, AIC or AICc should be favored over CAIC for estimating weights.
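
    For reference, one common form of the model-averaged estimator and the unconditional variance referred to above (the Buckland et al. formulation often paired with AIC-type weights w_i) is

        \bar N = \sum_i w_i \hat N_i,
        \qquad
        \widehat{\mathrm{var}}(\bar N) = \Bigl[ \sum_i w_i \sqrt{\widehat{\mathrm{var}}(\hat N_i \mid M_i) + (\hat N_i - \bar N)^2} \Bigr]^2,

    where the (N̂_i - N̄)² term is what injects model uncertainty into the interval width.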

  14. Dynamic microphones M-87/AIC and M-101/AIC and earphone H-143/AIC. [for space shuttle]

    NASA Technical Reports Server (NTRS)

    Reiff, F. H.

    1975-01-01

    The electrical characteristics of the M-87/AIC and M-101/AIC dynamic microphones and H-143 earphones were tested for the purpose of establishing the relative performance levels of units supplied by four vendors. The microphones and earphones were tested for frequency response, sensitivity, linearity, impedance and noise cancellation. Test results are presented and discussed.

  15. Information criteria and selection of vibration models.

    PubMed

    Ruzek, Michal; Guyader, Jean-Louis; Pézerat, Charles

    2014-12-01

    This paper presents a method of determining an appropriate equation of motion of two-dimensional plane structures like membranes and plates from vibration response measurements. The local steady-state vibration field is used as input for the inverse problem that approximately determines the dispersion curve of the structure. This dispersion curve is then statistically treated with the Akaike information criterion (AIC), which compares the experimentally measured curve to several candidate models (equations of motion). The model with the lowest AIC value is then chosen, and the utility of other models can also be assessed. This method is applied to three experimental case studies: a red cedar wood plate for musical instruments, a thick paper subjected to unknown membrane tension, and a thick composite sandwich panel. These three cases illustrate three different model selection situations.

  16. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    USGS Publications Warehouse

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is markedly superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.

  17. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified the true LP order for the additive genetic and permanent environmental effects with probability one, but AIC tended to favour overparameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates.

  18. AIC Computations Using Navier-Stokes Equations on Single Image Supercomputers For Design Optimization

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru

    2004-01-01

    A procedure to accurately generate AIC (aerodynamic influence coefficients) using a Navier-Stokes solver, including grid deformation, is presented. Preliminary results show good comparisons between experimental and computed flutter boundaries for a rectangular wing. A full wing body configuration of an orbital space plane is selected for demonstration on a large number of processors. In the final paper the AIC of the full wing body configuration will be computed. The scalability of the procedure on a supercomputer will be demonstrated.

  19. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
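
    A minimal sketch of Freedman's setup, assuming numpy and statsmodels (the sizes and thresholds follow the classic design with n comparable to p; nothing here reproduces the paper's model averaging estimator):

        # Pure-noise predictors with n comparable to p: naive screening
        # followed by refitting manufactures "significant" variables.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n, p = 100, 50
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)          # response unrelated to every predictor

        fit = sm.OLS(y, sm.add_constant(X)).fit()
        keep = fit.pvalues[1:] < 0.25   # first-pass screening
        fit2 = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        print("screened in:", keep.sum(), "of", p)
        print("'significant' at 0.05 after refit:", (fit2.pvalues[1:] < 0.05).sum())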

  20. Autonomic Intelligent Cyber Sensor (AICS) Version 1.0.1

    SciTech Connect

    2015-03-01

    The Autonomic Intelligent Cyber Sensor (AICS) provides cyber security and industrial network state awareness for Ethernet based control network implementations. The AICS utilizes collaborative mechanisms based on Autonomic Research and a Service Oriented Architecture (SOA) to: 1) identify anomalous network traffic; 2) discover network entity information; 3) deploy deceptive virtual hosts; and 4) implement self-configuring modules. AICS achieves these goals by dynamically reacting to the industrial human-digital ecosystem in which it resides. Information is transported internally and externally on a standards based, flexible two-level communication structure.

  2. How Well Can We Detect Lineage-Specific Diversification-Rate Shifts? A Simulation Study of Sequential AIC Methods

    PubMed Central

    May, Michael R.; Moore, Brian R.

    2016-01-01

    Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers—in order to clarify whether these methods can make reliable inferences from empirical datasets—and to theoretical biologists—in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.] PMID:27037081

  3. Model weights and the foundations of multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2006-01-01

    Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximate to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
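
    For reference, the weighted-BIC scheme advocated here takes the standard form: with BIC_i = -2 ln L_i + k_i ln n and explicitly chosen prior model probabilities π(M_i), approximate posterior model weights are

        p(M_i \mid y) \approx \frac{\pi(M_i)\exp(-\mathrm{BIC}_i / 2)}{\sum_j \pi(M_j)\exp(-\mathrm{BIC}_j / 2)},

    in contrast to AIC weighting, whose implicit priors the paper shows can strongly favor the most complex models in the set.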

  4. Towards a Model Selection Rule for Quantum State Tomography

    NASA Astrophysics Data System (ADS)

    Scholten, Travis; Blume-Kohout, Robin

    Quantum tomography on large and/or complex systems will rely heavily on model selection techniques, which permit on-the-fly selection of small efficient statistical models (e.g. small Hilbert spaces) that accurately fit the data. Many model selection tools, such as hypothesis testing or Akaike's AIC, rely implicitly or explicitly on the Wilks Theorem, which predicts the behavior of the loglikelihood ratio statistic (LLRS) used to choose between models. We used Monte Carlo simulations to study the behavior of the LLRS in quantum state tomography, and found that it disagrees dramatically with Wilks' prediction. We propose a simple explanation for this behavior; namely, that boundaries (in state space and between models) play a significant role in determining the distribution of the LLRS. The resulting distribution is very complex, depending strongly both on the true state and the nature of the data. We consider a simplified model that neglects anisotropy in the Fisher information, derive an almost analytic prediction for the mean value of the LLRS, and compare it to numerical experiments. While our simplified model outperforms the Wilks Theorem, it still does not predict the LLRS accurately, implying that alternative methods may be necessary for tomographic model selection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE.
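
    For reference, the Wilks prediction at issue (standard form): if a k_0-parameter model M_0 is nested in a k_1-parameter model M_1 and the true parameter lies in the interior of both parameter spaces, then under M_0

        \lambda = 2\bigl[\ln L(\hat\theta_1) - \ln L(\hat\theta_0)\bigr] \;\xrightarrow{d}\; \chi^2_{k_1 - k_0},

    and the abstract's explanation is that state-space and between-model boundaries in tomography break precisely this interior-point assumption.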

  5. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    USGS Publications Warehouse

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    To model the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
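
    For reference, the criterion used here is the standard deviance information criterion of Spiegelhalter et al.: with deviance D(θ) = -2 ln L(θ), posterior mean deviance D̄, and posterior mean θ̄,

        p_D = \bar{D} - D(\bar\theta),
        \qquad
        \mathrm{DIC} = \bar{D} + p_D = D(\bar\theta) + 2 p_D,

    where p_D plays the role of an effective number of parameters, which is what makes the criterion sample-size specific in the sense noted above.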

  6. Mission science value-cost savings from the Advanced Imaging Communication System (AICS)

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1984-01-01

    An Advanced Imaging Communication System (AICS) was proposed in the mid-1970s as an alternative to the Voyager data/communication system architecture. The AICS achieved virtually error-free communication with little loss in the downlink data rate by concatenating a powerful Reed-Solomon block code with the Voyager convolutionally coded, Viterbi-decoded downlink channel. The clean channel allowed AICS to employ sophisticated adaptive data compression techniques. Both Voyager and the Galileo mission have implemented AICS components, and the concatenated channel itself is heading for international standardization. An analysis that assigns a dollar value/cost savings to AICS mission performance gains is presented. A conservative value or savings of $3 million for Voyager, $4.5 million for Galileo, and as much as $7 to 9.5 million per mission for future projects such as the proposed Mariner Mark II series is shown.

  7. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters in a rheological model, the better it will reproduce available data, though this does not mean that it is necessarily a better-justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model relative to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.

  8. Tightening the Noose on LMXB Formation of MSPs: Need for AIC ?

    NASA Astrophysics Data System (ADS)

    Grindlay, J. E.; Yi, I.

    1997-12-01

    The origin of millisecond pulsars (MSPs) remains an outstanding problem despite the early and considerable evidence that they are the descendants of neutron stars spun up by accretion in low mass x-ray binaries (LMXBs). The route to MSPs from LMXBs may pass through the high luminosity Z-source LMXBs but is (severely) limited by the very limited population (and apparent birth rate) of Z-sources available. The more numerous x-ray bursters, the Atoll sources, are likely to (still) be short in numbers or birth rate but are now also found to be likely inefficient in the spin-up torques they can provide: the accretion in these relatively low accretion rate systems is likely dominated by an advection dominated flow in which matter accretes onto the NS via sub-Keplerian flows which then transfer correspondingly less angular momentum to the NS. We investigate the implications of the possible ADAF flows in low luminosity NS-LMXBs and find it is unlikely they can produce MSPs. The standard model can still be allowed if most NS-LMXBs are quiescent and undergo transient-like outbursts similar to the soft x-ray transients (which mostly contain black holes). However, apart from Cen X-4 and Aql X-1, few such systems have been found and the SXTs appear instead to be significantly deficient in NS systems. Direct production of MSPs by the accretion induced collapse (AIC) of white dwarfs has been previously suggested to solve the MSP vs. LMXB birth rate problem. We re-examine AIC models in light of the new constraints on direct LMXB production and the additional difficulty imposed by ADAF flows and constraints on SXT populations and derive constraints on the progenitor WD spin and magnetic fields.

  9. Double point source W-phase inversion: Real-time implementation and automated model selection

    NASA Astrophysics Data System (ADS)

    Nealy, Jennifer L.; Hayes, Gavin P.

    2015-12-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.

  10. Double point source W-phase inversion: Real-time implementation and automated model selection

    USGS Publications Warehouse

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.

  11. Selecting among competing models of electro-optic, infrared camera system range performance

    USGS Publications Warehouse

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.

  12. AN/AIC-22(V) Intercommunications Set (ICS) fiber optic link engineering analysis report

    NASA Astrophysics Data System (ADS)

    Minter, Richard; Blocksom, Roland; Ling, Christopher

    1990-08-01

    Electromagnetic interference (EMI) problems constitute a serious threat to operational Navy aircraft systems. The application of fiber optic technology is a potential solution to these problems. Reported EMI problems in the P-3 patrol aircraft AN/AIC-22(V) Intercommunications Set (ICS) were selected from an EMI problem database for investigation and possible application of fiber optic technology. A proof-of-concept experiment was performed to demonstrate the level of EMI immunity of fiber optics when used in an ICS. A full duplex single channel fiber optic audio link was designed and assembled from modified government furnished equipment (GFE) previously used in another Navy fiber optic application. The link was taken to the Naval Air Test Center (NATC) Patuxent River, Maryland and temporarily installed in a Naval Research Laboratory (NRL) P-3A aircraft for a side-by-side comparison test with the installed ICS. With regard to noise reduction, the fiber optic link provided a qualitative improvement over the conventional ICS. In an effort to obtain a quantitative measure of comparison, measurements were made across the audio frequency range both with and without operation of the aircraft VHF and UHF radio transmitters.

  13. Prediction of thoracic injury severity in frontal impacts by selected anatomical morphomic variables through model-averaged logistic regression approach.

    PubMed

    Zhang, Peng; Parenteau, Chantal; Wang, Lu; Holcombe, Sven; Kohoyda-Inglis, Carla; Sullivan, June; Wang, Stewart

    2013-11-01

    This study resulted in a model-averaging methodology that predicts crash injury risk using vehicle, demographic, and morphomic variables and assesses the importance of individual predictors. The effectiveness of this methodology was illustrated through analysis of occupant chest injuries in frontal vehicle crashes. The crash data were obtained from the International Center for Automotive Medicine (ICAM) database for calendar years 1996 to 2012. The morphomic data are quantitative measurements of variations in human body 3-dimensional anatomy. Morphomics are obtained from imaging records. In this study, morphomics were obtained from chest, abdomen, and spine CT using novel patented algorithms. A NASS-trained crash investigator with over thirty years of experience collected the in-depth crash data. There were 226 cases available with occupants involved in frontal crashes and morphomic measurements. Only cases with complete recorded data were retained for statistical analysis. Logistic regression models were fitted using all possible configurations of vehicle, demographic, and morphomic variables. Different models were ranked by the Akaike Information Criterion (AIC). An averaged logistic regression model approach was used due to the limited sample size relative to the number of variables. This approach is helpful when addressing variable selection, building prediction models, and assessing the importance of individual variables. The final predictive results were developed using this approach, based on the top 100 models in the AIC ranking. Model-averaging minimized model uncertainty, decreased the overall prediction variance, and provided an approach to evaluating the importance of individual variables. There were 17 variables investigated: four vehicle, four demographic, and nine morphomic. More than 130,000 logistic models were investigated in total. The models were characterized into four scenarios to assess individual variable contribution to injury risk. Scenario
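
    A minimal sketch of the model-averaging recipe described above, assuming statsmodels and invented column names (not the ICAM variables), and averaging over the top 5 models rather than the paper's top 100:

        # Fit a logistic model for every predictor subset, rank by AIC,
        # and average predictions with Akaike weights over the top models.
        from itertools import combinations
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        cols = ["delta_v", "age", "bmi", "chest_depth"]   # hypothetical predictors
        df = pd.DataFrame(rng.normal(size=(226, 4)), columns=cols)
        risk = 1 / (1 + np.exp(-(0.8 * df["delta_v"] + 0.5 * df["age"])))
        df["injury"] = rng.binomial(1, risk)

        fits = []
        for r in range(1, len(cols) + 1):
            for subset in combinations(cols, r):
                fits.append(sm.Logit(df["injury"],
                                     sm.add_constant(df[list(subset)])).fit(disp=0))

        fits.sort(key=lambda f: f.aic)
        top = fits[:5]
        delta = np.array([f.aic - top[0].aic for f in top])
        w = np.exp(-delta / 2)
        w /= w.sum()                                      # Akaike weights
        avg_pred = sum(wi * f.predict() for wi, f in zip(w, top))
        print("model-averaged in-sample risk, first 3 occupants:",
              np.round(np.asarray(avg_pred)[:3], 3))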

  15. Individual Influence on Model Selection

    ERIC Educational Resources Information Center

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  16. Validity of methods for model selection, weighting for model uncertainty, and small sample adjustment in capture-recapture estimation.

    PubMed

    Hook, E B; Regal, R R

    1997-06-15

    In log-linear capture-recapture approaches to population size, the method of model selection may have a major effect upon the estimate. In addition, the estimate may also be very sensitive if certain cells are null or very sparse, even with the use of multiple sources. The authors evaluated 1) various approaches to the issue of model uncertainty and 2) a small sample correction for three or more sources recently proposed by Hook and Regal. The authors compared the estimates derived using 1) three different information criteria that included Akaike's Information Criterion (AIC) and two alternative formulations of the Bayesian Information Criterion (BIC), one proposed by Draper ("two pi") and one by Schwarz ("not two pi"); 2) two related methods of weighting estimates associated with models; 3) the independent model; and 4) the saturated model, with the known totals in 20 different populations studied by five separate groups of investigators. For each method, we also compared the estimate derived with or without the proposed small sample correction. At least in these data sets, the use of AIC appeared on balance to be preferable. The BIC formulation suggested by Draper appeared slightly preferable to that suggested by Schwarz. Adjustment for model uncertainty appears to improve results slightly. The proposed small sample correction appeared to diminish relative log bias but only when sparse cells were present. Otherwise, its use tended to increase relative log bias. Use of the saturated model (with or without the small sample correction) appears to be optimal if the associated interval is not uselessly large, and if one can plausibly exclude an all-source interaction. All other approaches led to an estimate that was too low by about one standard deviation.

  17. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    PubMed Central

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-01-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
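
    A minimal sketch of the brute-force Monte Carlo reference used above, under toy assumptions (Gaussian likelihood with known unit variance, Gaussian prior on the mean): BME = ∫ p(y|θ) p(θ) dθ is estimated by averaging the likelihood over prior draws.

        import numpy as np

        rng = np.random.default_rng(3)
        y = rng.normal(0.5, 1.0, size=20)             # observed data (toy)

        theta = rng.normal(0.0, 2.0, size=50_000)     # draws from the N(0, 2^2) prior
        log_lik = (-0.5 * ((y[None, :] - theta[:, None]) ** 2).sum(axis=1)
                   - 0.5 * y.size * np.log(2 * np.pi))
        # Average the likelihood over prior draws, stably in log space.
        log_bme = np.logaddexp.reduce(log_lik) - np.log(theta.size)
        print("log BME ≈", round(float(log_bme), 3))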

  18. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.

  19. Efficiency of model selection criteria in flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Calenda, G.; Volpi, E.

    2009-04-01

    The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical to this point. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson-Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008), and the Sample Quantile Criterion (SQC), recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred is the sample value in respect to its density distribution (accuracy of the estimate) and the less spread is this distribution (uncertainty of the estimate), the greater is the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples. The
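
    Written out, the index as described reads (one plausible formalization of the verbal definition, with \hat f the density fitted under the candidate law and x_i the sample values treated as relevant):

        \mathrm{SQC} = \sum_i \ln \frac{1}{\hat f(x_i)} = -\sum_i \ln \hat f(x_i),

    so that lower values indicate sample quantiles that sit both centrally and in regions of high fitted density, matching the accuracy and uncertainty reading given above.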

  20. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively
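
    A minimal sketch of the comparison described above, assuming statsmodels and simulated counts rather than the mayfly data (intercept-only models for brevity):

        # Compare Poisson, negative binomial (count mean treated as a
        # gamma random variable), and zero-inflated Poisson fits by AIC.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.discrete.count_model import ZeroInflatedPoisson

        rng = np.random.default_rng(4)
        n = 959
        lam = rng.gamma(shape=0.5, scale=8.0, size=n)    # heterogeneous means
        y = rng.poisson(lam) * rng.binomial(1, 0.6, n)   # plus structural zeros
        X = np.ones((n, 1))                              # intercept only

        fits = {
            "Poisson": sm.Poisson(y, X).fit(disp=0),
            "NegBin":  sm.NegativeBinomial(y, X).fit(disp=0),
            "ZIP":     ZeroInflatedPoisson(y, X).fit(disp=0),
        }
        for name, f in fits.items():
            print(f"{name:8s} AIC = {f.aic:10.1f}")
        # Expect Poisson to fare worst under this overdispersed,
        # zero-heavy data-generating process.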

  1. Modeling Natural Selection

    ERIC Educational Resources Information Center

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  2. Launch vehicle selection model

    NASA Technical Reports Server (NTRS)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction
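
    A toy version of such a linear program, sketched in Python with scipy (all fleet figures are made up, and a production model would need integer launch counts and learning-curve cost adjustments, as the abstract notes):

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical fleet data: cost per launch ($B) and payload per launch (t to LEO)
cost = np.array([0.40, 0.09, 1.20])            # HLLV, medium lifter, shuttle-class
payload = np.array([140.0, 20.0, 100.0])
max_launches = np.array([6, 40, 8])            # launch-rate ceilings over the plan period
demand = 1000.0                                # total mass to deliver to LEO (t)

# minimise total cost subject to delivering at least the demanded mass
res = linprog(c=cost, A_ub=[-payload], b_ub=[-demand],
              bounds=list(zip([0, 0, 0], max_launches)))
print("launches per vehicle:", np.round(res.x, 1))
print("total cost ($B):", round(res.fun, 2))
```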

  3. Modeling Epistasis in Genomic Selection.

    PubMed

    Jiang, Yong; Reif, Jochen C

    2015-10-01

    Modeling epistasis in genomic selection is impeded by a high computational load. The extended genomic best linear unbiased prediction (EG-BLUP) with an epistatic relationship matrix and the reproducing kernel Hilbert space regression (RKHS) are two attractive approaches that reduce the computational load. In this study, we proved the equivalence of EG-BLUP and genomic selection approaches that explicitly model epistatic effects. Moreover, we have shown why the RKHS model based on a Gaussian kernel captures epistatic effects among markers. Using experimental data sets in wheat and maize, we compared different genomic selection approaches and concluded that prediction accuracy can be improved by modeling epistasis for selfing species but may not be for outcrossing species. PMID:26219298

  4. Model selection for logistic regression models

    NASA Astrophysics Data System (ADS)

    Duller, Christine

    2012-09-01

    Model selection for logistic regression models decides which of some given potential regressors have an effect and hence should be included in the final model. The second interesting question is whether a certain factor is heterogeneous among some subsets, i.e. whether the model should include a random intercept or not. In this paper these questions will be answered with classical as well as with Bayesian methods. The applications show some results of recent research projects in medicine and business administration.
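
    As a minimal illustration of the classical side of this question, the Python sketch below runs an exhaustive AIC search over candidate regressor subsets for a logistic regression (synthetic data; the random-intercept question would need a mixed model and is not covered here):

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                                   # x2 is pure noise
p = 1 / (1 + np.exp(-(0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, p)

best = (np.inf, None)
for k in range(4):
    for subset in combinations(range(3), k):
        exog = np.column_stack([np.ones(n), X[:, list(subset)]])
        res = sm.Logit(y, exog).fit(disp=0)
        best = min(best, (res.aic, subset))
print("AIC-selected regressors:", best[1], " AIC:", round(best[0], 1))
```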

  5. Comparison of Two Gas Selection Methodologies: An Application of Bayesian Model Averaging

    SciTech Connect

    Renholds, Andrea S.; Thompson, Sandra E.; Anderson, Kevin K.; Chilton, Lawrence K.

    2006-03-31

    One goal of hyperspectral imagery analysis is the detection and characterization of plumes. Characterization includes identifying the gases in the plumes, which is a model selection problem. Two gas selection methods compared in this report are Bayesian model averaging (BMA) and minimum Akaike information criterion (AIC) stepwise regression (SR). Simulated spectral data from a three-layer radiance transfer model were used to compare the two methods. Test gases were chosen to span the types of spectra observed, which exhibit peaks ranging from broad to sharp. The size and complexity of the search libraries were varied. Background materials were chosen to either replicate a remote area of eastern Washington or feature many common background materials. For many cases, BMA and SR performed the detection task comparably in terms of the receiver operating characteristic curves. For some gases, BMA performed better than SR when the size and complexity of the search library increased. This is encouraging because we expect improved BMA performance upon incorporation of prior information on background materials and gases.

  6. Multidimensional Rasch Model Information-Based Fit Index Accuracy

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Wolfe, Edward W.

    2013-01-01

    Most research on confirmatory factor analysis using information-based fit indices (Akaike information criterion [AIC], Bayesian information criteria [BIC], bias-corrected AIC [AICc], and consistent AIC [CAIC]) has used a structural equation modeling framework. Minimal research has been done concerning application of these indices to item response…
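
    For reference, the four information-based indices named above have standard closed forms, with \(\hat{L}\) the maximized likelihood, \(k\) the number of estimated parameters, and \(n\) the sample size:

\[
\mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad
\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},
\]
\[
\mathrm{BIC} = -2\ln\hat{L} + k\ln n, \qquad
\mathrm{CAIC} = -2\ln\hat{L} + k(\ln n + 1).
\]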

  7. Selected Tether Applications Cost Model

    NASA Technical Reports Server (NTRS)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  8. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimations. We also evaluate the abilities of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well controlled type I errors, higher power, better goodness of fit measures, and are more accurate and efficient in the parameter estimation. In addition, the hurdle models have goodness of fit and parameter estimation for the count component similar to their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
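
    Where AIC comparison between non-nested count models is inconclusive, the Vuong statistic mentioned above can be computed directly from pointwise log-likelihoods; a minimal Python sketch with statsmodels, on synthetic data and using the unadjusted statistic:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson
from scipy import stats

rng = np.random.default_rng(7)
n = 400
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(3.0, n))    # zero-inflated counts
X = np.ones((n, 1))

pois = sm.Poisson(y, X).fit(disp=0)
zip_ = ZeroInflatedPoisson(y, X, exog_infl=X).fit(disp=0)

# Vuong statistic from pointwise log-likelihood differences
m = zip_.model.loglikeobs(zip_.params) - pois.model.loglikeobs(pois.params)
v = np.sqrt(n) * m.mean() / m.std(ddof=1)
print(f"Vuong z = {v:.2f}, two-sided p = {2 * stats.norm.sf(abs(v)):.4f}")
# positive z favours the zero-inflated model
```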

  9. Perturbation of energy metabolism by fatty-acid derivative AIC-47 and imatinib in BCR-ABL-harboring leukemic cells.

    PubMed

    Shinohara, Haruka; Kumazaki, Minami; Minami, Yosuke; Ito, Yuko; Sugito, Nobuhiko; Kuranaga, Yuki; Taniguchi, Kohei; Yamada, Nami; Otsuki, Yoshinori; Naoe, Tomoki; Akao, Yukihiro

    2016-02-01

    In Ph-positive leukemia, imatinib brought marked clinical improvement; however, further improvement is needed to prevent relapse. Cancer cells efficiently use limited energy sources, and drugs targeting cellular metabolism improve the efficacy of therapy. In this study, we characterized the effects of the novel anti-cancer fatty-acid derivative AIC-47 and imatinib, focusing on cancer-specific energy metabolism in chronic myeloid leukemia cells. AIC-47 and imatinib in combination exhibited significant synergistic cytotoxicity. Imatinib inhibited only the phosphorylation of BCR-ABL, whereas AIC-47 suppressed the expression of the protein itself. Both AIC-47 and imatinib modulated the expression of pyruvate kinase M (PKM) isoforms from PKM2 to PKM1 through the down-regulation of polypyrimidine tract-binding protein 1 (PTBP1). PTBP1 functions as an alternative-splicing repressor of PKM1, resulting in expression of PKM2, which is an inactive form of pyruvate kinase for the last step of glycolysis. Although inactivation of BCR-ABL by imatinib strongly suppressed glycolysis, compensatory fatty-acid oxidation (FAO) activation supported glucose-independent cell survival by up-regulating CPT1C, the rate-limiting FAO enzyme. In contrast, AIC-47 inhibited the expression of CPT1C and directly suppressed fatty-acid metabolism. These findings were also observed in the CD34(+) fraction of Ph-positive acute lymphoblastic leukemia cells. These results suggest that AIC-47 in combination with imatinib strengthened the attack on cancer energy metabolism, in terms of both glycolysis and the compensatory activation of FAO.

  10. IRT Model Selection Methods for Dichotomous Items

    ERIC Educational Resources Information Center

    Kang, Taehoon; Cohen, Allan S.

    2007-01-01

    Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…

  11. [Selection of advantage prediction model for forest fire occurrence in Tahe, Daxing'an Mountain].

    PubMed

    Qin, Kai-Lun; Guo, Fu-Tao; Di, Xue-Ying; Sun, Long; Song, Yu-Hui; Wu, Yao; Pan, Jian-Feng

    2014-03-01

    This study applied the zero-inflated and hurdle models that have been widely used in economic and social fields to model fire occurrence in Tahe, Daxing'an Mountain. The AIC, LR and SSR were used to compare the models, including the zero-inflated Poisson model (ZIP), zero-inflated negative binomial model (ZINB), Poisson hurdle model (PH) and negative binomial hurdle model (NBH) (two types, four models in total), so as to determine a better-fitting model to predict local fire occurrence. The results illustrated that the ZINB model was superior to the other three models (ZIP, PH and NBH) based on the results of the AIC and SSR tests. The LR test revealed that the negative binomial distribution was suitable for the "count" portion of both the zero-inflated and hurdle models. Furthermore, this paper concluded that the zero-inflated model could better fit the fire occurrence characteristics of the study area, consistent with the hypotheses of the two model types.

  12. Student learning using the natural selection model

    NASA Astrophysics Data System (ADS)

    Mesmer, Karen Luann

    Students often have difficulty in learning natural selection, a major model in biology. This study examines what middle school students are capable of learning when taught about natural selection using a modeling approach. Students were taught the natural selection model including the components of population, variation, selective advantage, survival, heredity and reproduction. They then used the model to solve three case studies. Their learning was evaluated from responses on a pretest, a posttest and interviews. The results suggest that middle school students can identify components of the natural selection model in a Darwinian explanation, explain the significance of the components and relate them to each other as well as solve evolutionary problems using the model.

  13. A Computational Model of Selection by Consequences

    ERIC Educational Resources Information Center

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…
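
    For flavour, here is a minimal Python sketch of a behaviour repertoire evolving by fitness-proportional selection, recombination, and low-rate mutation; it is a generic genetic algorithm in the spirit of the model described, not McDowell's actual implementation, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
POP, GENS, L = 100, 200, 10          # repertoire size, generations, genome length
target = 512                          # behaviours decoding near this value are "reinforced"

pop = rng.integers(0, 2, size=(POP, L))          # repertoire of binary behaviours

def fitness(pop):
    vals = pop @ (2 ** np.arange(L))             # decode bitstring to integer
    return 1.0 / (1.0 + np.abs(vals - target))   # closer to target -> more reinforcement

for g in range(GENS):
    f = fitness(pop)
    parents = pop[rng.choice(POP, size=POP, p=f / f.sum())]   # selection by consequences
    cut = rng.integers(1, L, size=POP)                        # single-point recombination
    children = np.where(np.arange(L) < cut[:, None], parents, parents[::-1])
    mutate = rng.random(children.shape) < 0.01                # low-rate mutation
    pop = np.where(mutate, 1 - children, children)

print("mean emitted value after selection:", (pop @ (2 ** np.arange(L))).mean())
```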

  14. Model selection for amplitude analysis

    NASA Astrophysics Data System (ADS)

    Guegan, B.; Hardin, J.; Stevens, J.; Williams, M.

    2015-09-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of possible amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a method for limiting model complexity from the data sample itself through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. An outline of how to obtain the significance of a resonance in a multivariate amplitude analysis is also provided.
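
    Regularized regression of this kind can be illustrated with an L1 (lasso) penalty that zeroes out contributions the data do not require; the Python sketch below is a linearized toy with real-valued coefficients, not a full complex-amplitude Dalitz-plot likelihood fit:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, n_amps = 500, 30
basis = rng.normal(size=(n, n_amps))            # candidate amplitude contributions per event
true_coef = np.zeros(n_amps)
true_coef[[2, 7, 11]] = [1.5, -2.0, 0.8]        # only a few amplitudes truly contribute
y = basis @ true_coef + rng.normal(scale=0.5, size=n)

model = LassoCV(cv=5).fit(basis, y)             # L1 penalty drives irrelevant terms to zero
print("amplitudes retained by the L1 penalty:", np.flatnonzero(model.coef_))
```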

  15. The Ouroboros Model, selected facets.

    PubMed

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at any one time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure for the goodness of fit provides feedback as a (self-)monitoring signal. The basic algorithm works for goal-directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  16. Adaptive Modeling Procedure Selection by Data Perturbation*

    PubMed Central

    Zhang, Yongli; Shen, Xiaotong

    2015-01-01

    Summary Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy. PMID:26640319
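
    The basic mechanics can be sketched in a few lines of Python: perturb the response, rerun an AIC best-subset selection, and measure how stable the selected model is (the perturbation size here is fixed by hand, whereas the paper derives an optimal, data-adaptive value):

```python
import numpy as np
import statsmodels.api as sm
from itertools import combinations

rng = np.random.default_rng(9)
n, p = 120, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

def aic_best_subset(y, X):
    best = (np.inf, None)
    for k in range(p + 1):
        for s in combinations(range(p), k):
            exog = np.column_stack([np.ones(len(y)), X[:, list(s)]])
            best = min(best, (sm.OLS(y, exog).fit().aic, s))
    return best[1]

base = aic_best_subset(y, X)
tau = 0.5 * y.std()                  # hand-picked perturbation size (not the paper's optimum)
picks = [aic_best_subset(y + rng.normal(scale=tau, size=n), X) for _ in range(50)]
print("base selection:", base)
print("stability under perturbation:", np.mean([pk == base for pk in picks]))
```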

  17. Review and selection of unsaturated flow models

    SciTech Connect

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  18. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
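
    One practical caveat when comparing LMMs by information criteria: fixed-effects structures must be compared under ML rather than REML, since REML likelihoods are not comparable across different fixed effects. A minimal statsmodels sketch on synthetic longitudinal data (parameter counts entered by hand):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_occ = 30, 8
g = np.repeat(np.arange(n_subj), n_occ)                  # subject index
t = np.tile(np.arange(n_occ), n_subj)                    # occasion
u = rng.normal(scale=1.0, size=n_subj)[g]                # random intercepts
y = 2.0 + 0.5 * t + u + rng.normal(scale=0.7, size=n_subj * n_occ)
df = pd.DataFrame({"y": y, "t": t, "g": g})

# fit by ML (reml=False) so the two fixed-effects structures are comparable
m1 = smf.mixedlm("y ~ t", df, groups="g").fit(reml=False)
m0 = smf.mixedlm("y ~ 1", df, groups="g").fit(reml=False)

def aic(res, k):                                         # k = fixed effects + variance parameters
    return 2 * k - 2 * res.llf

print("AIC with time effect:", round(aic(m1, 4), 1))     # 2 fixed + RE variance + residual
print("AIC intercept-only: ", round(aic(m0, 3), 1))
```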

  19. A sensorimotor paradigm for Bayesian model selection.

    PubMed

    Genewein, Tim; Braun, Daniel A

    2012-01-01

    Sensorimotor control is thought to rely on predictive internal models in order to cope efficiently with uncertain environments. Recently, it has been shown that humans not only learn different internal models for different tasks, but that they also extract common structure between tasks. This raises the question of how the motor system selects between different structures or models, when each model can be associated with a range of different task-specific parameters. Here we design a sensorimotor task that requires subjects to compensate for visuomotor shifts in a three-dimensional virtual reality setup, where one of the dimensions can be mapped to a model variable and the other dimension to the parameter variable. By introducing probe trials that are neutral in the parameter dimension, we can directly test for model selection. We found that model selection procedures based on Bayesian statistics provided a better explanation for subjects' choice behavior than simple non-probabilistic heuristics. Our experimental design lends itself to the general study of model selection in a sensorimotor context as it allows model and parameter variables to be queried separately from subjects. PMID:23125827

  20. Melody Track Selection Using Discriminative Language Model

    NASA Astrophysics Data System (ADS)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
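
    A toy version of the scoring step, with add-alpha smoothing, in plain Python (the pitch sequences and candidate tracks are made up, and the paper's discriminative background-model step is omitted):

```python
import math
from collections import Counter

def bigram_logprob(track, counts, totals, vocab, alpha=1.0):
    """Average log-probability of a pitch sequence under an add-alpha smoothed bigram model."""
    lp = sum(
        math.log((counts[(a, b)] + alpha) / (totals[a] + alpha * vocab))
        for a, b in zip(track, track[1:])
    )
    return lp / max(len(track) - 1, 1)          # length-normalised melodic score

melodies = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]]   # toy training melodies (MIDI pitches)
counts = Counter(p for m in melodies for p in zip(m, m[1:]))
totals = Counter(p for m in melodies for p in m[:-1])
vocab = len({p for m in melodies for p in m})

tracks = {"candidate A": [60, 62, 64, 65, 67], "candidate B": [36, 36, 43, 36, 43]}
best = max(tracks, key=lambda k: bigram_logprob(tracks[k], counts, totals, vocab))
print("selected melody track:", best)
```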

  1. Multiscale modelling of ovarian follicular selection.

    PubMed

    Clément, Frédérique; Monniaux, Danielle

    2013-12-01

    In mammals, the number of ovulations at each ovarian cycle is determined during the terminal phase of follicular development by a tightly controlled follicle selection process. The mechanisms underlying follicle selection take place on different scales and different levels of the gonadotropic axis. These include the endocrine loops between the ovary and the hypothalamic-pituitary complex, the dynamics of follicle populations within the ovary and the dynamics of cell populations within ovarian follicles. A compartmental modelling approach was first designed to describe the cell dynamics in the selected follicle. It laid the basis for a multiscale model formulated with partial differential equations of conservation law type, resulting in the structuring of the follicular cell populations according to cell age and cell maturity. In this model, the selection occurs as a FSH (follicle stimulating hormone)-driven competition between simultaneously developing follicles. The selection output (mono-ovulation, poly-ovulation or anovulation) results from a subtle interplay between the hypothalamus, the pituitary gland and the ovaries, combined with slight differences in the initial conditions or ageing and maturation velocities of the competing follicles. This modelling approach is proposed as a useful complement to experimental studies of follicular development and in turn, the mechanisms of follicle selection raise challenging questions on the mathematical ground.
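
    In generic form (a sketch of the equation class described, not the authors' exact system), a follicular cell density \(\rho_f(t,a,\gamma)\) structured by age \(a\) and maturity \(\gamma\) evolves under a conservation law of the type

\[
\partial_t \rho_f + \partial_a\big(g_f(u_f)\,\rho_f\big) + \partial_\gamma\big(h_f(u_f)\,\rho_f\big) = -\lambda(u_f)\,\rho_f ,
\]

    where the ageing and maturation velocities \(g_f, h_f\) and the loss term \(\lambda\) are controlled by the follicle-level FSH signal \(u_f\), which couples the cell scale to the endocrine scale.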

  2. Using generalized linear models to estimate selectivity from short-term recoveries of tagged red drum Sciaenops ocellatus: Effects of gear, fate, and regulation period

    USGS Publications Warehouse

    Bacheler, N.M.; Hightower, J.E.; Burdick, S.M.; Paramore, L.M.; Buckel, J.A.; Pollock, K.H.

    2010-01-01

    Estimating the selectivity patterns of various fishing gears is a critical component of fisheries stock assessment due to the difficulty in obtaining representative samples from most gears. We used short-term recoveries (n = 3587) of tagged red drum Sciaenops ocellatus to directly estimate age- and length-based selectivity patterns using generalized linear models. The most parsimonious models were selected using AIC, and standard deviations were estimated using simulations. Selectivity of red drum was dependent upon the regulation period in which the fish was caught, the gear used to catch the fish (i.e., hook-and-line, gill nets, pound nets), and the fate of the fish upon recovery (i.e., harvested or released); models including all first-order interactions between main effects outperformed models without interactions. Selectivity of harvested fish was generally dome-shaped and shifted toward larger, older fish in response to regulation changes. Selectivity of caught-and-released red drum was highest on the youngest and smallest fish in the early and middle regulation periods, but increased on larger, legal-sized fish in the late regulation period. These results suggest that catch-and-release mortality has consistently been high for small, young red drum, but has recently become more common in larger, older fish. This method of estimating selectivity from short-term tag recoveries is valuable because it is simpler than full tag-return models, and may be more robust because yearly fishing and natural mortality rates do not need to be modeled and estimated. © 2009 Elsevier B.V.

  4. An Ss Model with Adverse Selection.

    ERIC Educational Resources Information Center

    House, Christopher L.; Leahy, John V.

    2004-01-01

    We present a model of the market for a used durable in which agents face fixed costs of adjustment, the magnitude of which depends on the degree of adverse selection in the secondary market. We find that, unlike typical models, the sS bands in our model contract as the variance of the shock increases. We also analyze a dynamic version of the model…

  5. Automated sample plan selection for OPC modeling

    NASA Astrophysics Data System (ADS)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models and also to maintain or improve the quality of the data collected with regard to how well that data represents the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Framing the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of quality equivalent to the traditional plan of record (POR) set, but in less time.

  6. Posterior Predictive Bayesian Phylogenetic Model Selection

    PubMed Central

    Lewis, Paul O.; Xie, Wangang; Chen, Ming-Hui; Fan, Yu; Kuo, Lynn

    2014-01-01

    We present two distinctly different posterior predictive approaches to Bayesian phylogenetic model selection and illustrate these methods using examples from green algal protein-coding cpDNA sequences and flowering plant rDNA sequences. The Gelfand–Ghosh (GG) approach allows dissection of an overall measure of model fit into components due to posterior predictive variance (GGp) and goodness-of-fit (GGg), which distinguishes this method from the posterior predictive P-value approach. The conditional predictive ordinate (CPO) method provides a site-specific measure of model fit useful for exploratory analyses and can be combined over sites yielding the log pseudomarginal likelihood (LPML) which is useful as an overall measure of model fit. CPO provides a useful cross-validation approach that is computationally efficient, requiring only a sample from the posterior distribution (no additional simulation is required). Both GG and CPO add new perspectives to Bayesian phylogenetic model selection based on the predictive abilities of models and complement the perspective provided by the marginal likelihood (including Bayes Factor comparisons) based solely on the fit of competing models to observed data. [Bayesian; conditional predictive ordinate; CPO; L-measure; LPML; model selection; phylogenetics; posterior predictive.] PMID:24193892
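
    For reference, the CPO and LPML quantities have standard sample-based forms: given posterior draws \(\theta^{(1)},\dots,\theta^{(M)}\),

\[
\mathrm{CPO}_i = p\left(y_i \mid \mathbf{y}_{(-i)}\right) \approx \left(\frac{1}{M}\sum_{m=1}^{M}\frac{1}{p\left(y_i \mid \theta^{(m)}\right)}\right)^{-1},
\qquad
\mathrm{LPML} = \sum_{i}\log \mathrm{CPO}_i .
\]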

  7. On spatial mutation-selection models

    SciTech Connect

    Kondratiev, Yuri; Kutoviy, Oleksandr E-mail: kutovyi@mit.edu; Minlos, Robert; Pirogov, Sergey

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation for stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of initial Markov process by cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system including the limiting behavior is studied for two types of mutation-selection models.

  8. Comparing Smoothing Techniques for Fitting the Nonlinear Effect of Covariate in Cox Models

    PubMed Central

    Roshani, Daem; Ghaderi, Ebrahim

    2016-01-01

    Background and Objective: The Cox model is a popular model in survival analysis which assumes linearity of the covariate on the log hazard function. However, continuous covariates can affect the hazard through more complicated nonlinear functional forms, so Cox models with continuous covariates are prone to misspecification due to not fitting the correct functional form. In this study, a smooth nonlinear covariate effect was approximated by different spline functions. Material and Methods: We applied three flexible nonparametric smoothing techniques for nonlinear covariate effects in Cox models: penalized splines, restricted cubic splines and natural splines. The Akaike information criterion (AIC) and degrees of freedom were used for smoothing parameter selection in the penalized splines model. The ability of the nonparametric methods to recover the true functional form of linear, quadratic and nonlinear functions was evaluated using different simulated sample sizes. Data analysis was carried out using R 2.11.0 software and the significance level was set at 0.05. Results: Based on AIC, the penalized spline method had consistently lower mean square error than the other methods for selection of the smoothing parameter. The same result was obtained with real data. Conclusion: The penalized spline smoothing method, with AIC for smoothing parameter selection, was more accurate in evaluating the relation between a covariate and the log hazard function than the other methods. PMID:27041809

  9. A model for plant lighting system selection

    NASA Technical Reports Server (NTRS)

    Ciolkosz, D. E.; Albright, L. D.; Sager, J. C.; Langhans, R. W.

    2002-01-01

    A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
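
    The additive multi-attribute utility calculation at the core of such a model fits in a few lines of Python; the systems, attributes, scores and weights below are all hypothetical placeholders for expert input:

```python
import numpy as np

# attribute scores (rows: lighting systems; columns: attributes), on a common 0-1 scale
systems = ["HPS", "metal halide", "LED"]
scores = np.array([
    [0.8, 0.4, 0.6],    # energy efficiency, spectral quality, capital cost
    [0.6, 0.7, 0.5],
    [0.9, 0.9, 0.3],
])
weights = np.array([0.5, 0.3, 0.2])         # expert-elicited importance weights, summing to 1

utility = scores @ weights                   # additive multi-attribute utility
print("selected system:", systems[int(np.argmax(utility))])
```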

  10. Observability in strategic models of viability selection.

    PubMed

    Gámez, M; Carreño, R; Kósa, A; Varga, Z

    2003-10-01

    Strategic models of frequency-dependent viability selection, in terms of mathematical systems theory, are considered as a dynamic observation system. Using a general sufficient condition for observability of nonlinear systems with invariant manifold, it is studied whether, observing certain phenotypic characteristics of the population, the development of its genetic state can be recovered, at least near equilibrium.

  11. Quantile hydrologic model selection and uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Pande, S.; Keyzer, M. A.; Savenije, H.; Gosain, A. K.

    2010-12-01

    Inapplicability of state-of-the-art hydrological models due to scarce data motivates the need for a modeling approach that can be well constrained to available data and still model dominant processes. Such an approach requires embedded model relationships to be simple and parsimonious in parameters for robust model selection. Simplicity in functional relationships is also important from a water management point of view if these models are to be coupled with economic system models for meaningful policy assessment. We propose a semi-distributed approach wherein we model already known dominant processes in dryland areas of Western India (evaporation, Hortonian overland flows, transmission losses and subsurface flows) in a simple but constrained manner through mathematical programming of relevant equations and constraints. Diverse data sources such as GRACE, MERRA reanalysis data, the FAO soil texture map and even Indian Agricultural Census data are used. Such a modeling approach allows uncertainty quantification through quantile parameter estimation, which we present in this talk. Quantile estimation transfers uncertainty due to hydrologic model misspecification or data uncertainty, based on quantiles of residuals, onto parameters of the hydrologic model with a fixed structure. An adaptation of quantile regression to parsimonious hydrologic model estimation, this frequentist approach seeks to complement existing Bayesian approaches to model parameter and prediction uncertainty.
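
    The quantile-estimation idea can be illustrated with ordinary quantile regression; a minimal statsmodels sketch on synthetic heteroscedastic rainfall-runoff data (the paper's constrained semi-distributed model is far richer than this):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(13)
n = 200
rain = rng.gamma(2.0, 10.0, size=n)                       # toy forcing
runoff = 0.4 * rain + rng.normal(scale=0.1 * rain + 0.5)  # heteroscedastic response

X = sm.add_constant(rain)
for q in (0.1, 0.5, 0.9):                                 # lower, median, upper quantile fits
    res = sm.QuantReg(runoff, X).fit(q=q)
    print(f"tau = {q}: intercept = {res.params[0]:.2f}, slope = {res.params[1]:.2f}")
```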

  13. Review and selection of unsaturated flow models

    SciTech Connect

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  14. Role of model selection criteria in geostatistical inverse estimation of statistical data- and model-parameters

    NASA Astrophysics Data System (ADS)

    Riva, Monica; Panzeri, Marco; Guadagnini, Alberto; Neuman, Shlomo P.

    2011-07-01

    We analyze theoretically the ability of model quality (sometimes termed information or discrimination) criteria such as the negative log likelihood NLL, Bayesian criteria BIC and KIC and information theoretic criteria AIC, AICc, and HIC to estimate (1) the parameter vector of the variogram of hydraulic log conductivity (Y = ln K), and (2) statistical parameters proportional to the head and log conductivity measurement error variances, in the context of geostatistical groundwater flow inversion. Our analysis extends the work of Hernandez et al. (2003, 2006) and Riva et al. (2009), who developed nonlinear stochastic inverse algorithms that allow conditioning estimates of steady state and transient hydraulic heads, fluxes and their associated uncertainty on information about conductivity and head data collected in a randomly heterogeneous confined aquifer. Their algorithms are based on recursive numerical approximations of exact nonlocal conditional equations describing the mean and (co)variance of groundwater flow. Log conductivity is parameterized geostatistically based on measured values at discrete locations and unknown values at discrete "pilot points." Optionally, the maximum likelihood function on which the inverse estimation of Y at pilot points is based may include a regularization term reflecting prior information about Y. The relative weight assigned to this term and the measurement error variance parameters are evaluated separately from other model parameters to avoid bias and instability. This evaluation is done on the basis of criteria such as NLL, KIC, BIC, HIC, AIC, and AICc. We demonstrate theoretically that, whereas all six criteria make it possible to estimate the variogram parameters, KIC alone allows one to validly estimate the measurement error variance parameters (and thus the regularization weight). We illustrate this discriminatory power of KIC numerically by using a differential evolution genetic search algorithm to minimize it in the context of a two-dimensional steady state groundwater flow

  15. Tracking Models for Optioned Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Liang, Jianfeng

    In this paper we study a target tracking problem for portfolio selection involving options. In particular, the portfolio in question contains a stock index and some European-style options on the index. A refined tracking-error-variance methodology is adopted to formulate this problem as a multi-stage optimization model. We derive the optimal solutions based on stochastic programming and optimality conditions. Attention is paid to the structure of the optimal payoff function, which is shown to possess rich properties.

  16. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap considering the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.

  17. Model selection for radiochromic film dosimetry

    NASA Astrophysics Data System (ADS)

    Méndez, I.

    2015-05-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to provide better results than using Micke-Mayer perturbation models. Among the models being compared, the triple-channel model with Truncated Normal perturbations, net optical density as the response and subject to the application of lateral corrections was found to be the most accurate model. The scope of this study was circumscribed by the limits under which the models were tested. In this study, the films were irradiated with megavoltage radiotherapy beams, with doses from about 20 to 600 cGy, entire (8 inch × 10 inch) films were scanned, the functional form of the sensitometric curves was a polynomial and the different lots were calibrated using the plane-based method.

  19. Selection and estimation for mixed graphical models

    PubMed Central

    Chen, Shizhe; Witten, Daniela M.; shojaie, Ali

    2016-01-01

    Summary We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies. PMID:27625437

  20. Model selection versus model averaging in dose finding studies.

    PubMed

    Schorning, Kirsten; Bornkamp, Björn; Bretz, Frank; Dette, Holger

    2016-09-30

    A key objective of Phase II dose finding studies in clinical drug development is to adequately characterize the dose response relationship of a new drug. An important decision is then on the choice of a suitable dose response function to support dose selection for the subsequent Phase III studies. In this paper, we compare different approaches for model selection and model averaging using mathematical properties as well as simulations. We review and illustrate asymptotic properties of model selection criteria and investigate their behavior when changing the sample size but keeping the effect size constant. In a simulation study, we investigate how the various approaches perform in realistically chosen settings. Finally, the different methods are illustrated with a recently conducted Phase II dose finding study in patients with chronic obstructive pulmonary disease. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27226147
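
    On the model-averaging side, one common recipe converts information-criterion values into model weights; a minimal Python sketch with hypothetical AIC values for candidate dose-response shapes:

```python
import numpy as np

def akaike_weights(aics):
    """Turn AIC values into normalized model weights."""
    d = np.asarray(aics) - np.min(aics)          # AIC differences from the best model
    w = np.exp(-0.5 * d)
    return w / w.sum()

# hypothetical AICs for fitted candidate dose-response models
aics = {"Emax": 250.1, "sigmoid Emax": 251.0, "linear": 254.3}
for name, wi in zip(aics, akaike_weights(list(aics.values()))):
    print(f"{name}: weight = {wi:.2f}")
# a model-averaged dose-response estimate is then sum_i w_i * f_i(dose)
```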

  1. Improved modeling of GPS selective availability

    NASA Technical Reports Server (NTRS)

    Braasch, Michael S.; Fink, Annmarie; Duffus, Keith

    1994-01-01

    Selective Availability (SA) represents the dominant error source for stand-alone users of the Global Positioning System (GPS). Even for DGPS, SA mandates the update rate required for a desired level of accuracy in realtime applications. As has been witnessed in the recent literature, the ability to model this error source is crucial to the proper evaluation of GPS-based systems. A variety of SA models have been proposed to date; however, each has its own shortcomings. Most of these models are based on limited data sets or data which were corrupted by additional error sources. A comprehensive treatment of the problem is presented. The phenomenon of SA is discussed and a technique is presented whereby both clock and orbit components of SA are identifiable. Extensive SA data sets collected from Block 2 satellites are presented. System identification theory is then used to derive a robust model of SA from the data. This theory also allows for the statistical analysis of SA. The stationarity of SA over time and across different satellites is analyzed and its impact on the modeling problem is discussed.

  2. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    NASA Astrophysics Data System (ADS)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.

  3. Bayesian model selection analysis of WMAP3

    SciTech Connect

    Parkinson, David; Mukherjee, Pia; Liddle, Andrew R.

    2006-06-15

    We present a Bayesian model selection analysis of WMAP3 data using our code CosmoNest. We focus on the density perturbation spectral index n_S and the tensor-to-scalar ratio r, which define the plane of slow-roll inflationary models. We find that while the Bayesian evidence supports the conclusion that n_S ≠ 1, the data are not yet powerful enough to do so at a strong or decisive level. If tensors are assumed absent, the current odds are approximately 8 to 1 in favor of n_S ≠ 1 under our assumptions, when WMAP3 data is used together with external data sets. WMAP3 data on its own is unable to distinguish between the two models. Further, inclusion of r as a parameter weakens the conclusion against the Harrison-Zel'dovich case (n_S = 1, r = 0), albeit in a prior-dependent way. In appendices we describe the CosmoNest code in detail, noting its ability to supply posterior samples as well as to accurately compute the Bayesian evidence. We make a first public release of CosmoNest, now available at www.cosmonest.org.

  4. Correcting for selection using frailty models.

    PubMed

    Olesen, Anne Vingaard; Parner, Erik Thorlund

    2006-05-30

    Chronic diseases are, roughly speaking, lifelong transitions between two states: relapse and recovery. The long-term pattern of recurrent times-to-relapse can be investigated with routine register data on hospital admissions. The relapses become readmissions to hospital, and the times spent in hospital are gaps between subsequent times-at-risk. However, problems of selection and dependent censoring arise because the calendar period of observation is limited and the study population is likely to be heterogeneous. We theoretically verify that an assumption of conditional independence of all times-at-risk and gaps, given the latent individual frailty level, allows for consistent inference in the shared frailty model. Using simulation studies, we also investigate cases where gaps (and/or staggered entry) are informative for the individual frailty. We found that the use of the shared frailty model can be extended to situations where gaps are dependent on the frailty but short compared to the distribution of the times-to-relapse. Our motivating example deals with the course of schizophrenia. We analysed routine register data on readmissions in almost 9000 persons with the disorder. Marginal survival curves of time-to-first-readmission, time-to-second-readmission, etc. were estimated in the shared frailty model. Based on the schizophrenia literature, the conclusion of our analysis was rather surprising: one of a stable course of disorder. PMID:16252271

  5. Modeling selective local interactions with memory

    PubMed Central

    Galante, Amanda; Levy, Doron

    2012-01-01

    Recently we developed a stochastic particle system describing local interactions between cyanobacteria. We focused on the common freshwater cyanobacteria Synechocystis sp., which are coccoidal bacteria that utilize group dynamics to move toward a light source, a motion referred to as phototaxis. We were particularly interested in the local interactions between cells that were located in low to medium density areas away from the front. The simulations of our stochastic particle system in 2D replicated many experimentally observed phenomena, such as the formation of aggregations and the quasi-random motion of cells. In this paper, we seek to develop a better understanding of group dynamics produced by this model. To facilitate this study, we replace the stochastic model with a system of ordinary differential equations describing the evolution of particles in 1D. Unlike many other models, our emphasis is on particles that selectively choose one of their neighbors as the preferred direction of motion. Furthermore, we incorporate memory by allowing persistence in the motion. We conduct numerical simulations which allow us to efficiently explore the space of parameters, in order to study the stability, size, and merging of aggregations. PMID:24244060

  6. Entropic Priors and Bayesian Model Selection

    NASA Astrophysics Data System (ADS)

    Brewer, Brendon J.; Francis, Matthew J.

    2009-12-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's razor." This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst cosmologists: is dark energy a cosmological constant, or has it evolved with time in some way? And how shall we decide, when the data are in?

  7. Selecting a model of supersymmetry breaking mediation

    SciTech Connect

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-08-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  8. Newton, Einstein, Jeffreys and Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Chettri, Samir; Batchelor, David; Campbell, William; Balakrishnan, Karthik

    2005-11-01

    In [1], Jefferys and Berger apply Bayesian model selection to the problem of choosing between rival theories, in particular between Einstein's theory of general relativity (GR) and Newtonian gravity (NG). [1] presents a debate between Harold Jeffreys and Charles Poor regarding the observed 43''/century anomalous perihelion precession of Mercury. GR made a precise prediction of 42.98''/century while proponents of NG suggested several physical mechanisms that were eventually refuted, with the exception of a modified inverse square law. Using Bayes factors (BF) and data available in 1921, [1] shows that GR is preferable to NG by a factor of about 25 to 1. A scale for BF used by Jeffreys suggests that this is positive to strong evidence for GR over modified NG, but it is not very strong or even overwhelming. In this work we calculate the BF for the period 1921 till 1993. By 1960 we see that the BF, due to better data-gathering techniques and advances in technology, had reached a factor of greater than 100 to 1, making GR strongly preferable to NG, and by 1990 the BF reached 1000:1. Ironically, while the BF had reached a state of near certainty by 1960, rival theories of gravitation were on the rise - notably the Brans-Dicke (BD) scalar-tensor theory of gravity. The BD theory is postulated in such a way that for small positive values of a scalar parameter ω the BF would favor GR, while the BF would approach unity as ω grows larger, at which point either theory would be preferred, i.e., it is a theory that cannot lose. Does this mean Bayesian model selection needs to be overthrown? This points to the need for cogent prior information guided by physics and physical experiment.
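
    A minimal sketch of the Bayes-factor calculation this record describes: a sharp, parameter-free prediction (GR) competes against a rival whose free parameter carries a broad prior (modified NG), and prior averaging penalizes the vaguer theory. Only the 42.98''/century prediction comes from the abstract; the observed value, its uncertainty, and the prior range below are assumptions for illustration.

```python
import numpy as np
from scipy import stats

# Illustrative numbers: only the GR prediction of 42.98 arcsec/century is
# taken from the record; the observation and its sigma are assumptions.
obs, sigma = 43.0, 0.5

# Model 1 (GR): sharp prediction, no free parameters.
evidence_gr = stats.norm.pdf(obs, loc=42.98, scale=sigma)

# Model 2 (modified NG): the predicted precession is a free parameter with
# a broad uniform prior; the evidence averages the likelihood over the
# prior, which is the "Occam factor" penalizing the vaguer theory.
theta = np.linspace(0.0, 100.0, 10_001)
evidence_ng = stats.norm.pdf(obs, loc=theta, scale=sigma).mean()

print("Bayes factor, GR vs modified NG:", evidence_gr / evidence_ng)
```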

  9. Model Related Estimates of time dependent quantiles of peak flows - case study for selected catchments in Poland

    NASA Astrophysics Data System (ADS)

    Strupczewski, Witold G.; Bogdanowich, Ewa; Debele, Sisay

    2016-04-01

    Under Polish climate conditions the series of Annual Maxima (AM) flows are usually a mixture of peak flows of thaw- and rainfall-originated floods. The northern, lowland regions are dominated by snowmelt floods whilst in mountainous regions the proportion of rainfall floods is predominant. In many stations the majority of AM can be of snowmelt origin, but the greatest peak flows come from rainfall floods, or vice versa. In a warming climate, precipitation is less likely to occur as snowfall. A shift from a snow- towards a rain-dominated regime results in a decreasing trend in the mean and standard deviation of winter peak flows, whilst rainfall floods do not exhibit any trace of non-stationarity. That is why a simple form of trend (e.g. a linear trend) is more difficult to identify in AM time-series than in Seasonal Maxima (SM), usually winter season time-series. Hence it is recommended to analyse trends in SM, where a trend in standard deviation strongly influences the time-dependent upper quantiles. The uncertainty associated with the extrapolation of the trend makes it necessary to apply a trend relationship whose time derivative tends to zero, e.g. we can assume that a new climate equilibrium epoch is approaching, or that the time horizon is limited by the validity of the trend model. For both winter and summer SM time series, at least three distribution functions with trend models in the location, scale and shape parameters are estimated by means of the GAMLSS package using ML techniques. The resulting trend estimates in mean and standard deviation are mutually compared to the observed trends. Then, using AIC measures as weights, a multi-model distribution is constructed for each of the two seasons separately. Further, assuming a mutual independence of the seasonal maxima, an AM model with time-dependent parameters can be obtained. The use of a multi-model approach can alleviate the effects of different and often contradictory trends obtained by using and identifying
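
    The multi-model step above turns AIC scores into model weights. A minimal sketch of Akaike weights (the AIC values below are hypothetical, not the study's):

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights from AIC scores: w_i is proportional to exp(-delta_i/2),
    where delta_i is each model's AIC distance from the best model."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AICs for three seasonal-maxima trend models.
print(akaike_weights([612.4, 614.1, 619.8]))
```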

  10. Bayesian Methods for High Dimensional Linear Models

    PubMed Central

    Mallick, Himel; Yi, Nengjun

    2013-01-01

    In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallow’s Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions. PMID:24511433
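
    For reference, the traditional criteria named above reduce to closed forms for a Gaussian linear model. A minimal sketch of AIC and BIC from the residual sum of squares (the synthetic design and coefficients below are made up):

```python
import numpy as np

def gaussian_linear_ic(X, y):
    """AIC and BIC for an OLS fit with unknown noise variance."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = p + 1                                   # coefficients + noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])
y = X @ np.array([1.0, 2.0, 0.0, 0.0]) + rng.normal(size=100)
aic, bic = gaussian_linear_ic(X, y)
print(f"AIC={aic:.1f}  BIC={bic:.1f}")
```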

  11. Bayesian Methods for High Dimensional Linear Models.

    PubMed

    Mallick, Himel; Yi, Nengjun

    2013-06-01

    In this article, we present a selective overview of some recent developments in Bayesian model and variable selection methods for high dimensional linear models. While most of the reviews in literature are based on conventional methods, we focus on recently developed methods, which have proven to be successful in dealing with high dimensional variable selection. First, we give a brief overview of the traditional model selection methods (viz. Mallow's Cp, AIC, BIC, DIC), followed by a discussion on some recently developed methods (viz. EBIC, regularization), which have occupied the minds of many statisticians. Then, we review high dimensional Bayesian methods with a particular emphasis on Bayesian regularization methods, which have been used extensively in recent years. We conclude by briefly addressing the asymptotic behaviors of Bayesian variable selection methods for high dimensional linear models under different regularity conditions.

  12. Increasing selection response by Bayesian modeling of heterogeneous environmental variances

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Heterogeneity of environmental variance among genotypes reduces selection response because genotypes with higher variance are more likely to be selected than low-variance genotypes. Modeling heterogeneous variances to obtain weighted means corrected for heterogeneous variances is difficult in likel...

  13. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Selection of risk model. 425.600 Section 425.600... Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  14. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Selection of risk model. 425.600 Section 425.600... Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  15. 42 CFR 425.600 - Selection of risk model.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Selection of risk model. 425.600 Section 425.600... Selection of risk model. (a) For its initial agreement period, an ACO may elect to operate under one of the following tracks: (1) Track 1. Under Track 1, the ACO operates under the one-sided model (as described...

  16. Validation subset selections for extrapolation oriented QSPAR models.

    PubMed

    Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László

    2003-01-01

    One of the most important features of QSPAR models is their predictive ability. The predictive ability of QSPAR models should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in-silico screening. The usefulness of the selection methods was studied in such a way that: 1) We generated thousands of QSPR models and stored them in 'model banks'. 2) We selected a final top model from the model banks based on three different validation set selection methods. 3) We predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding SEPs. The models were generated from small fractions of the available water solubility data during a GA Variable Subset Selection procedure. The external validation sets were constructed by random selections, uniformly distributed selections or by perimeter-oriented selections. We found that the best performing models on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make a lot of extrapolations. We also compared the top final models obtained from external validation set selection methods in three independent and different sizes of 'chemical universe sets'.
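
    A hedged sketch of one way to construct a perimeter-oriented validation set: reserve the points farthest from the descriptor-space centroid, so the model must extrapolate to them. This is a plausible stand-in for the paper's selection scheme, not its actual algorithm:

```python
import numpy as np

def perimeter_validation_set(X, n_val):
    """Reserve the n_val points farthest from the centroid as validation."""
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    val_idx = np.argsort(d)[-n_val:]            # outermost points
    train_idx = np.setdiff1d(np.arange(len(X)), val_idx)
    return train_idx, val_idx

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))                   # hypothetical descriptor matrix
train, val = perimeter_validation_set(X, 40)
print(len(train), "training points,", len(val), "perimeter validation points")
```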

  17. Interacting Boson Model: selected recent developments

    SciTech Connect

    Balantekin, A.B.

    1986-01-01

    The Interacting Boson Model is briefly reviewed. Recent applications of this model to the low-lying collective magnetic-dipole excitations and to the spectra of ¹⁹⁵Ir are described. 13 refs., 3 figs.

  18. HABITAT MODELING APPROACHES FOR RESTORATION SITE SELECTION

    EPA Science Inventory

    Numerous modeling approaches have been used to develop predictive models of species-environment and species-habitat relationships. These models have been used in conservation biology and habitat or species management, but their application to restoration efforts has been minimal...

  19. Cognitive Niches: An Ecological Model of Strategy Selection

    ERIC Educational Resources Information Center

    Marewski, Julian N.; Schooler, Lael J.

    2011-01-01

    How do people select among different strategies to accomplish a given task? Across disciplines, the strategy selection problem represents a major challenge. We propose a quantitative model that predicts how selection emerges through the interplay among strategies, cognitive capacities, and the environment. This interplay carves out for each…

  20. Patterns of neutral diversity under general models of selective sweeps.

    PubMed

    Coop, Graham; Ralph, Peter

    2012-09-01

    Two major sources of stochasticity in the dynamics of neutral alleles result from resampling of finite populations (genetic drift) and the random genetic background of nearby selected alleles on which the neutral alleles are found (linked selection). There is now good evidence that linked selection plays an important role in shaping polymorphism levels in a number of species. One of the best-investigated models of linked selection is the recurrent full-sweep model, in which newly arisen selected alleles fix rapidly. However, the bulk of selected alleles that sweep into the population may not be destined for rapid fixation. Here we develop a general model of recurrent selective sweeps in a coalescent framework, one that generalizes the recurrent full-sweep model to the case where selected alleles do not sweep to fixation. We show that in a large population, only the initial rapid increase of a selected allele affects the genealogy at partially linked sites, which under fairly general assumptions are unaffected by the subsequent fate of the selected allele. We also apply the theory to a simple model to investigate the impact of recurrent partial sweeps on levels of neutral diversity and find that for a given reduction in diversity, the impact of recurrent partial sweeps on the frequency spectrum at neutral sites is determined primarily by the frequencies rapidly achieved by the selected alleles. Consequently, recurrent sweeps of selected alleles to low frequencies can have a profound effect on levels of diversity but can leave the frequency spectrum relatively unperturbed. In fact, the limiting coalescent model under a high rate of sweeps to low frequency is identical to the standard neutral model. The general model of selective sweeps we describe goes some way toward providing a more flexible framework to describe genomic patterns of diversity than is currently available. PMID:22714413

  1. Selection of Temporal Lags When Modeling Economic and Financial Processes.

    PubMed

    Matilla-Garcia, Mariano; Ojeda, Rina B; Marin, Manuel Ruiz

    2016-10-01

    This paper suggests new nonparametric statistical tools and procedures for modeling linear and nonlinear univariate economic and financial processes. In particular, the tools presented help in selecting relevant lags in the model description of a general linear or nonlinear time series; that is, nonlinear models are not a restriction. The tests seem to be robust to the selection of free parameters. We also show that the test can be used as a diagnostic tool for well-defined models. PMID:27550703

  2. Model Selection for Monitoring CO2 Plume during Sequestration

    2014-12-31

    The model selection method developed as part of this project mainly includes four steps: (1) assessing the connectivity/dynamic characteristics of a large prior ensemble of models, (2) model clustering using multidimensional scaling coupled with k-means clustering, (3) model selection using Bayes' rule in the reduced model space, (4) model expansion using iterative resampling of the posterior models. The fourth step expresses one of the advantages of the method: it provides a built-in means of quantifying the uncertainty in predictions made with the selected models. In our application to plume monitoring, by expanding the posterior space of models, the final ensemble of representations of the geological model can be used to assess the uncertainty in predicting the future displacement of the CO2 plume. The software implementation of this approach is attached here.
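
    Steps 2 and 3 can be sketched with standard tools. The dissimilarity matrix and per-cluster likelihoods below are placeholders, not the project's actual connectivity measures:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Step 1 placeholder: a symmetric dissimilarity matrix between 50 prior
# geological models (e.g. distances between simulated plume responses).
D = np.abs(rng.normal(size=(50, 50)))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

# Step 2: multidimensional scaling embedding, then k-means clustering.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

# Step 3 (schematic): Bayes' rule over clusters, with hypothetical
# likelihoods of the monitoring data under each cluster's representative.
like = np.array([0.1, 0.6, 0.2, 0.1])
prior = np.bincount(labels, minlength=4) / len(labels)
post = like * prior / np.sum(like * prior)
print("posterior cluster weights:", np.round(post, 3))
```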

  3. The Multilingual Lexicon: Modelling Selection and Control

    ERIC Educational Resources Information Center

    de Bot, Kees

    2004-01-01

    In this paper an overview of research on the multilingual lexicon is presented as the basis for a model for processing multiple languages. With respect to specific issues relating to the processing of more than two languages, it is suggested that there is no need to develop a specific model for such multilingual processing, but at the same time we…

  4. On Optimal Input Design and Model Selection for Communication Channels

    SciTech Connect

    Li, Yanyan; Djouadi, Seddik M; Olama, Mohammed M

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
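
    A toy illustration of the paper's conclusion: with an impulse applied at the start of the observation interval, the noisy output of an FIR channel directly reveals its taps (the channel and noise level below are invented):

```python
import numpy as np

rng = np.random.default_rng(6)
h_true = np.array([1.0, 0.5, 0.25, 0.1])   # hypothetical FIR channel taps

# Impulse at the start of the observation interval.
n = 32
u = np.zeros(n)
u[0] = 1.0
y = np.convolve(u, h_true)[:n] + 0.01 * rng.normal(size=n)

# With an impulse input, the output samples are the taps plus noise, so
# reading off the first len(h_true) samples is the least-squares estimate.
h_hat = y[:len(h_true)]
print("estimated taps:", np.round(h_hat, 3))
```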

  5. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  6. Bayesian model selection for LISA pathfinder

    NASA Astrophysics Data System (ADS)

    Karnesis, Nikolaos; Nofrarias, Miquel; Sopuerta, Carlos F.; Gibert, Ferran; Armano, Michele; Audley, Heather; Congedo, Giuseppe; Diepholz, Ingo; Ferraioli, Luigi; Hewitson, Martin; Hueller, Mauro; Korsakova, Natalia; McNamara, Paul W.; Plagnol, Eric; Vitale, Stefano

    2014-03-01

    The main goal of the LISA Pathfinder (LPF) mission is to fully characterize the acceleration noise models and to test key technologies for future space-based gravitational-wave observatories similar to the eLISA concept. The data analysis team has developed complex three-dimensional models of the LISA Technology Package (LTP) experiment onboard the LPF. These models are used for simulations, but, more importantly, they will be used for parameter estimation purposes during flight operations. One of the tasks of the data analysis team is to identify the physical effects that contribute significantly to the properties of the instrument noise. A way of approaching this problem is to recover the essential parameters of a LTP model fitting the data. Thus, we want to define the simplest model that efficiently explains the observations. To do so, adopting a Bayesian framework, one has to estimate the so-called Bayes factor between two competing models. In our analysis, we use three main different methods to estimate it: the reversible jump Markov chain Monte Carlo method, the Schwarz criterion, and the Laplace approximation. They are applied to simulated LPF experiments in which the most probable LTP model that explains the observations is recovered. The same type of analysis presented in this paper is expected to be followed during flight operations. Moreover, the correlation of the output of the aforementioned methods with the design of the experiment is explored.
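
    Of the three estimators named above, the Laplace approximation is the simplest to sketch: expand the log posterior to second order around the MAP. The helper below is a generic illustration, not the LTP analysis code; `logpost` is assumed to return the unnormalized log posterior and `hess_neg` its negative Hessian at the MAP:

```python
import numpy as np

def laplace_log_evidence(logpost, theta_map, hess_neg):
    """Laplace approximation to the log evidence:
    log Z ~ logpost(theta_MAP) + (d/2) log(2*pi) - 0.5 * log|H|,
    where H is the negative Hessian of the log posterior at the MAP."""
    d = len(theta_map)
    _, logdet = np.linalg.slogdet(hess_neg)
    return logpost(theta_map) + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet

# Toy check: for a normalized standard-Gaussian "posterior" the evidence
# is exactly 1, and the Laplace approximation is exact.
logpost = lambda th: -0.5 * th @ th - 0.5 * len(th) * np.log(2 * np.pi)
print(np.exp(laplace_log_evidence(logpost, np.zeros(3), np.eye(3))))  # ~1.0

# The Bayes factor between two competing models is exp(logZ_1 - logZ_2).
```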

  7. Using multilevel models to quantify heterogeneity in resource selection

    USGS Publications Warehouse

    Wagner, T.; Diefenbach, D.R.; Christensen, S.A.; Norton, A.S.

    2011-01-01

    Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provides an objective, quantifiable approach to assess differences or changes in resource selection. © The Wildlife Society, 2011.
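
    A hedged sketch of the cross-level structure described here, with a linear mixed model as a stand-in (a true resource-selection analysis would use a use/availability logistic likelihood; all data below are simulated):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_deer, n_obs = 30, 50
deer = np.repeat(np.arange(n_deer), n_obs)
# Road density is a home-range-scale covariate, constant within each deer.
road_density = np.repeat(rng.uniform(0.2, 2.0, n_deer), n_obs)
dist_road = rng.exponential(1.0, n_deer * n_obs)
slope = 0.5 - 0.2 * road_density            # avoidance weakens with density
use = 0.3 + slope * dist_road + rng.normal(0, 0.5, len(deer))

df = pd.DataFrame(dict(deer=deer, dist_road=dist_road,
                       road_density=road_density, use=use))
# Random intercept and random dist_road slope per deer; the cross-level
# interaction lets each deer's slope depend on home-range road density.
fit = smf.mixedlm("use ~ dist_road * road_density", df,
                  groups=df["deer"], re_formula="~dist_road").fit()
print(fit.summary())
```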

  8. Constructor selection models for space construction missions

    NASA Technical Reports Server (NTRS)

    Johnson, Richard J.; Morgenthaler, George W.

    1990-01-01

    The topics are presented in viewgraph form and include the following: systems design; picking the best constructor pool; prototype model; linear programming; simulated annealing; neural networks; and genetic algorithms.

  9. Synthetic computational models of selective attention.

    PubMed

    Raffone, Antonino

    2006-11-01

    Computational modeling plays an important role to understand the mechanisms of attention. In this framework, synthetic computational models can uniquely contribute to integrate different explanatory levels and neurocognitive findings, with special reference to the integration of attention and awareness processes. Novel combined experimental and computational investigations can lead to important insights, as in the revived domain of neural correlates of attention- and awareness-related meditation states and traits.

  10. Python Program to Select HII Region Models

    NASA Astrophysics Data System (ADS)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly those of oxygen, nitrogen, and neon, can give important information about HII regions including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user-friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
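
    The matching step such a program performs reduces to a chi-square search over a model grid; a minimal sketch with invented line ratios (not Rubin's or Cloudy's actual values):

```python
import numpy as np

def best_model(obs_ratios, obs_err, model_grid):
    """Index of the model grid row minimizing chi-square against the
    observed line ratios, weighted by the observational uncertainties."""
    chi2 = np.sum(((model_grid - obs_ratios) / obs_err) ** 2, axis=1)
    return int(np.argmin(chi2)), chi2

# Hypothetical grid: each row is one model's predicted line ratios.
grid = np.array([[1.2, 0.8],
                 [0.9, 1.1],
                 [1.5, 0.6]])
idx, chi2 = best_model(np.array([1.0, 1.0]), np.array([0.1, 0.2]), grid)
print("best model:", idx, "chi2:", round(chi2[idx], 2))
```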

  11. Methods for model selection in applied science and engineering.

    SciTech Connect

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  12. Use of generalised additive models to categorise continuous variables in clinical prediction

    PubMed Central

    2013-01-01

    Background In medical practice many, essentially continuous, clinical parameters tend to be categorised by physicians for ease of decision-making. Indeed, categorisation is a common practice both in medical research and in the development of clinical prediction rules, particularly where the ensuing models are to be applied in daily clinical practice to support clinicians in the decision-making process. Since the number of categories into which a continuous predictor must be categorised depends partly on the relationship between the predictor and the outcome, the need for more than two categories must be borne in mind. Methods We propose a categorisation methodology for clinical-prediction models, using Generalised Additive Models (GAMs) with P-spline smoothers to determine the relationship between the continuous predictor and the outcome. The proposed method consists of creating at least one average-risk category along with high- and low-risk categories based on the GAM smooth function. We applied this methodology to a prospective cohort of patients with exacerbated chronic obstructive pulmonary disease. The predictors selected were respiratory rate and partial pressure of carbon dioxide in the blood (PCO2), and the response variable was poor evolution. An additive logistic regression model was used to show the relationship between the covariates and the dichotomous response variable. The proposed categorisation was compared to the continuous predictor as the best option, using the AIC and AUC evaluation parameters. The sample was divided into derivation (60%) and validation (40%) samples. The first was used to obtain the cut points while the second was used to validate the proposed methodology. Results The three-category proposal for the respiratory rate was ≤20; (20, 24]; >24, for which the following values were obtained: AIC=314.5 and AUC=0.638. The respective values for the continuous predictor were AIC=317.1 and AUC=0.634, with no statistically
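
    A sketch of the idea in Python, using pygam as a stand-in for the paper's GAM-with-P-splines setup; the simulated risk curve and the log-odds thresholds that define the low/average/high-risk categories are assumptions:

```python
import numpy as np
from pygam import LogisticGAM, s   # requires the third-party pygam package

rng = np.random.default_rng(4)
resp_rate = rng.uniform(10, 40, 500)
p = 1 / (1 + np.exp(-0.25 * (resp_rate - 22)))     # hypothetical risk curve
poor_evolution = (rng.random(500) < p).astype(int)

# Fit outcome ~ s(respiratory rate) with a spline smoother.
gam = LogisticGAM(s(0)).fit(resp_rate.reshape(-1, 1), poor_evolution)

# Cut points where the fitted smooth (log-odds scale) crosses chosen
# thresholds, yielding categories of the form <=a, (a, b], >b.
xx = np.linspace(10, 40, 301).reshape(-1, 1)
smooth = gam.partial_dependence(term=0, X=xx)
cuts = np.interp([-0.5, 0.5], smooth, xx[:, 0])     # illustrative thresholds
print("cut points:", np.round(cuts, 1))
```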

  13. A permutation approach for selecting the penalty parameter in penalized model selection.

    PubMed

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-12-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO-penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), scaled sparse linear regression, and a selection method based on recently developed testing procedures for the LASSO.
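
    A minimal sketch of the permutation idea as we read it (not necessarily the authors' exact procedure): permuting y destroys the signal, the smallest penalty that zeroes every LASSO coefficient for a permuted response is max|X'y_perm|/n, and a summary of that quantity over permutations gives a penalty calibrated to exclude null variables:

```python
import numpy as np
from sklearn.linear_model import Lasso

def permutation_lambda(X, y, n_perm=100, seed=0):
    """Median over permutations of the smallest LASSO penalty giving the
    all-zero model (lambda_max = max|X'y_perm| / n for standardized X)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    lams = [np.max(np.abs(X.T @ rng.permutation(y))) / n
            for _ in range(n_perm)]
    return float(np.median(lams))

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 50))
X = (X - X.mean(0)) / X.std(0)                 # standardize columns
y = 1.5 * X[:, 0] + rng.normal(size=200)
y = y - y.mean()

lam = permutation_lambda(X, y)
coef = Lasso(alpha=lam).fit(X, y).coef_
print("penalty:", round(lam, 4), "selected variables:", np.flatnonzero(coef))
```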

  14. Modeling Selective Intergranular Oxidation of Binary Alloys

    SciTech Connect

    Xu, Zhijie; Li, Dongsheng; Schreiber, Daniel K.; Rosso, Kevin M.; Bruemmer, Stephen M.

    2015-01-07

    Intergranular attack of alloys under hydrothermal conditions is a complex problem that depends on metal and oxygen transport kinetics via solid-state and channel-like pathways to an advancing oxidation front. Experiments reveal very different rates of intergranular attack and minor element depletion distances ahead of the oxidation front for nickel-based binary alloys depending on the minor element. For example, a significant Cr depletion up to 9 µm ahead of grain boundary crack tips was documented for Ni-5Cr binary alloy, in contrast to relatively moderate Al depletion for Ni-5Al (~100s of nm). We present a mathematical kinetics model that adapts Wagner’s model for thick film growth to intergranular attack of binary alloys. The transport coefficients of elements O, Ni, Cr, and Al in bulk alloys and along grain boundaries were estimated from the literature. For planar surface oxidation, a critical concentration of the minor element can be determined from the model where the oxide of minor element becomes dominant over the major element. This generic model for simple grain boundary oxidation can predict oxidation penetration velocities and minor element depletion distances ahead of the advancing front that are comparable to experimental data. The significant distance of depletion of Cr in Ni-5Cr in contrast to the localized Al depletion in Ni-5Al can be explained by the model due to the combination of the relatively faster diffusion of Cr along the grain boundary and slower diffusion in bulk grains, relative to Al.

  15. Modeling selective intergranular oxidation of binary alloys

    NASA Astrophysics Data System (ADS)

    Xu, Zhijie; Li, Dongsheng; Schreiber, Daniel K.; Rosso, Kevin M.; Bruemmer, Stephen M.

    2015-01-01

    Intergranular attack of alloys under hydrothermal conditions is a complex problem that depends on metal and oxygen transport kinetics via solid-state and channel-like pathways to an advancing oxidation front. Experiments reveal very different rates of intergranular attack and minor element depletion distances ahead of the oxidation front for nickel-based binary alloys depending on the minor element. For example, a significant Cr depletion up to 9 μm ahead of grain boundary crack tips was documented for Ni-5Cr binary alloy, in contrast to relatively moderate Al depletion for Ni-5Al (~100 s of nm). We present a mathematical kinetics model that adapts Wagner's model for thick film growth to intergranular attack of binary alloys. The transport coefficients of elements O, Ni, Cr, and Al in bulk alloys and along grain boundaries were estimated from the literature. For planar surface oxidation, a critical concentration of the minor element can be determined from the model where the oxide of minor element becomes dominant over the major element. This generic model for simple grain boundary oxidation can predict oxidation penetration velocities and minor element depletion distances ahead of the advancing front that are comparable to experimental data. The significant distance of depletion of Cr in Ni-5Cr in contrast to the localized Al depletion in Ni-5Al can be explained by the model due to the combination of the relatively faster diffusion of Cr along the grain boundary and slower diffusion in bulk grains, relative to Al.

  16. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    PubMed

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts-such as a baseline, scatter effects or noise-and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but which method or combination of methods should be used for the specific data being analyzed is difficult to select. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the true relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection not only improves the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets of which the true relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  18. The Genealogy of Samples in Models with Selection

    PubMed Central

    Neuhauser, C.; Krone, S. M.

    1997-01-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604

  19. Selection of Hydrological Model for Waterborne Release

    SciTech Connect

    Blanchard, A.

    1999-04-21

    Following a request from the States of South Carolina and Georgia, downstream radiological consequences from postulated accidental aqueous releases at the three Savannah River Site nonreactor nuclear facilities will be examined. This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the accidents to be used in the future study.

  20. Modeling HIV-1 drug resistance as episodic directional selection.

    PubMed

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  1. Rubber yield prediction by meteorological conditions using mixed models and multi-model inference techniques

    NASA Astrophysics Data System (ADS)

    Golbon, Reza; Ogutu, Joseph Ochieng; Cotter, Marc; Sauerborn, Joachim

    2015-12-01

    Linear mixed models were developed and used to predict rubber (Hevea brasiliensis) yield based on meteorological conditions to which rubber trees had been exposed for periods ranging from 1 day to 2 months prior to tapping events. Predictors included a range of moving averages of meteorological covariates spanning different windows of time before the date of the tapping events. Serial autocorrelation in the latex yield measurements was accounted for using random effects and a spatial generalization of the autoregressive error covariance structure suited to data sampled at irregular time intervals. Information-theoretic criteria, specifically the Akaike information criterion (AIC), AIC corrected for small sample size (AICc), and Akaike weights, were used to select the models with the greatest strength of support in the data from a set of competing candidate models. The predictive performance of the selected best model was evaluated using both leave-one-out cross-validation (LOOCV) and an independent test set. Moving averages of precipitation, minimum and maximum temperature, and maximum relative humidity with a 30-day lead period were identified as the best yield predictors. Prediction accuracy, expressed in terms of the percentage of predictions within a measurement error of 5 g for cross-validation and also for the test dataset, was above 99 %.
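
    Constructing lagged moving-average predictors of this kind is straightforward with pandas; the weather variables below are simulated and only the 30-day lead mirrors the paper's finding (window width and tapping schedule are assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.date_range("2012-01-01", periods=400, freq="D")
wx = pd.DataFrame({"precip": rng.gamma(2.0, 3.0, 400),
                   "tmax": 25 + 5 * rng.normal(size=400)}, index=dates)

# 30-day moving averages ending 30 days before each tapping date: the
# rolling mean at day t covers (t-29, ..., t); shifting by the lead makes
# row t hold the average over the window that ended `lead` days earlier.
lead, window = 30, 30
ma = wx.rolling(window).mean().shift(lead)

tapping_dates = dates[100::15]                 # hypothetical tapping events
predictors = ma.loc[tapping_dates]
print(predictors.head())
```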

  2. Model Selection on Solid Ground: Comparison of Techniques to Evaluate Bayesian Evidence

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Samaniego, L. E.; Wöhling, T.

    2014-12-01

    Bayesian model averaging (BMA) ranks and averages a set of plausible, competing models, based on their fit to available data and on their model complexity. BMA requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The BME integral is highly challenging, because it is as high-dimensional as the number of model parameters. Three classes of techniques are available to evaluate BME, each with its own challenges and limitations: exact analytical solutions are fast, but restricted by strong assumptions; brute-force numerical evaluation is accurate, but quickly becomes computationally infeasible; approximations known as information criteria (AIC, BIC, KIC) are known to yield contradicting results in model ranking. We conduct a systematic comparison of available techniques to evaluate BME, including a list of numerical schemes. We highlight their common features and differences, and investigate their computational effort and accuracy. For the latter, we investigate the impact of (a) data set size and (b) overlap between the prior and the likelihood. We use a synthetic example with an exact analytical solution (as a first-time validation against a true solution), and a real-world hydrological application, where we use a brute-force Monte Carlo method as the benchmark solution. Our results show that all ICs differ drastically in their quality of approximation. Of all the ICs, the KIC evaluated at the MAP performs best, but in general none of them is satisfactory for non-linear model problems. Since they share the goodness-of-fit term, the observed differences imply an inaccurate penalty for model complexity. Our findings indicate that the choice of approximation method substantially influences the accuracy of the BME estimate and, consequently, the final model ranking and BMA results.
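
    The brute-force benchmark mentioned here is prior-sampling Monte Carlo: BME is the likelihood averaged over draws from the prior. A generic sketch with a toy conjugate example (the interface and numbers are assumptions):

```python
import numpy as np
from scipy import stats

def log_bme_prior_mc(loglike, prior_sampler, n=100_000, seed=0):
    """Monte Carlo estimate of log BME = log E_prior[likelihood],
    computed with a log-sum-exp for numerical stability."""
    theta = prior_sampler(np.random.default_rng(seed), n)
    ll = loglike(theta)
    m = ll.max()
    return m + np.log(np.mean(np.exp(ll - m)))

# Toy example: Gaussian data with unit variance, Gaussian prior on the mean.
y = np.array([0.3, -0.1, 0.4, 0.2])
loglike = lambda th: stats.norm.logpdf(y[None, :], th[:, None], 1.0).sum(axis=1)
prior = lambda rng, n: rng.normal(0.0, 1.0, n)
print("log BME (Monte Carlo):", log_bme_prior_mc(loglike, prior))
```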

  3. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  4. A Conditional Logit Model of Collegiate Major Selection.

    ERIC Educational Resources Information Center

    Milley, Donald J.; Bee, Richard H.

    1982-01-01

    Hypothesizes a conditional logit model of decision making to explain collegiate major selection. Results suggest a link between student environment and preference structure and preference structures and student major selection. Suggests findings are limited by use of a largely commuter student population. (KMF)

  5. Augmented Self-Modeling as an Intervention for Selective Mutism

    ERIC Educational Resources Information Center

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  6. A Working Model of Natural Selection Illustrated by Table Tennis

    ERIC Educational Resources Information Center

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  7. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  8. Fluctuating Selection Models and Mcdonald-Kreitman Type Analyses

    PubMed Central

    Gossmann, Toni I.; Waxman, David; Eyre-Walker, Adam

    2014-01-01

    It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations that contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime, under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that methods tend to underestimate the rate of adaptive evolution when selection fluctuates. PMID:24409303

  9. Ecohydrological model parameter selection for stream health evaluation.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of best variable sets, models were developed to predict the measures of biological integrity using adaptive-neuro fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  11. Determinants of wood thrush nest success: A multi-scale, model selection approach

    USGS Publications Warehouse

    Driscoll, M.J.L.; Donovan, T.; Mickey, R.; Howard, A.; Fleming, K.K.

    2005-01-01

    We collected data on 212 wood thrush (Hylocichla mustelina) nests in central New York from 1998 to 2000 to determine the factors that most strongly influence nest success. We used an information-theoretic approach to assess and rank 9 models that examined the relationship between nest success (i.e., the probability that a nest would successfully fledge at least 1 wood thrush offspring) and habitat conditions at different spatial scales. We found that 4 variables were significant predictors of nesting success for wood thrushes: (1) total core habitat within 5 km of a study site, (2) distance to forest-field edge, (3) total forest cover within 5 km of the study site, and (4) density and variation in diameter of trees and shrubs surrounding the nest. The coefficients of these predictors were all positive. Of the 9 models evaluated, amount of core habitat in the 5-km landscape was the best-fit model, but the vegetation structure model (i.e., the density of trees and stems surrounding a nest) was also supported by the data. Based on AIC weights, enhancement of core area is likely to be a more effective management option than any other habitat-management options explored in this study. Bootstrap analysis generally confirmed these results; core and vegetation structure models were ranked 1, 2, or 3 in over 50% of 1,000 bootstrap trials. However, bootstrap results did not point to a decisive model, which suggests that multiple habitat factors are influencing wood thrush nesting success. Due to model uncertainty, we used a model averaging approach to predict the success or failure of each nest in our dataset. This averaged model was able to correctly predict 61.1% of nest outcomes.
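
    The model-averaging step can be sketched directly: convert AIC scores into Akaike weights and average each nest's predicted success probability across the candidate models (all numbers below are hypothetical, not the study's):

```python
import numpy as np

def model_averaged_probs(aic, pred_probs):
    """Akaike-weighted average of per-model predicted probabilities;
    pred_probs has one row per model and one column per nest."""
    aic = np.asarray(aic, dtype=float)
    w = np.exp(-0.5 * (aic - aic.min()))
    w /= w.sum()
    return w @ np.asarray(pred_probs)

aic = [210.3, 211.9, 215.0]                       # hypothetical model AICs
probs = [[0.7, 0.2, 0.5, 0.9],                    # model 1 predictions
         [0.6, 0.3, 0.5, 0.8],                    # model 2 predictions
         [0.8, 0.1, 0.4, 0.9]]                    # model 3 predictions
avg = model_averaged_probs(aic, probs)
print("averaged success probabilities:", np.round(avg, 2))
print("predicted to fledge:", avg >= 0.5)
```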

  12. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model from among a family of models that meet the simulation requirements presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: which model, of several numerical models, approximates the behavior of a structure when the parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
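
    A hedged sketch of the info-gap trade-off described here: for each candidate model, find the largest uncertainty horizon h whose worst-case prediction error still meets the accuracy tolerance. The models, tolerance, and uncertainty set are invented, and the worst case is approximated by sampling rather than solved exactly:

```python
import numpy as np

def robustness(model, theta0, data, tol, h_grid, n_samples=500, seed=0):
    """Largest horizon h such that the (sampled) worst-case error over
    parameters within +/- h of the nominal values stays within tol."""
    rng = np.random.default_rng(seed)
    h_best = 0.0
    for h in h_grid:
        draws = theta0 + h * rng.uniform(-1, 1, (n_samples, len(theta0)))
        worst = max(np.max(np.abs(model(t) - data)) for t in draws)
        if worst <= tol:
            h_best = h
        else:
            break
    return h_best

x = np.linspace(0, 1, 50)
data = 1.0 + 0.5 * x                        # "observed" structural response
m1 = lambda t: t[0] + t[1] * x              # linear candidate model
m2 = lambda t: t[0] * np.exp(t[1] * x)      # exponential candidate model
for name, m, th0 in [("linear", m1, np.array([1.0, 0.5])),
                     ("exponential", m2, np.array([1.0, 0.45]))]:
    h = robustness(m, th0, data, tol=0.2, h_grid=np.linspace(0, 0.5, 26))
    print(name, "robustness:", round(h, 2))
```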

  13. Development, Selection, and Validation of Tumor Growth Models

    NASA Astrophysics Data System (ADS)

    Shahmoradi, Amir; Lima, Ernesto; Oden, J. Tinsley

    In recent years, a multitude of different mathematical approaches have been taken to develop multiscale models of solid tumor growth. Prime successful examples include the lattice-based, agent-based (off-lattice), and phase-field approaches, or a hybrid of these models applied to multiple scales of tumor, from subcellular to tissue level. Of overriding importance is the predictive power of these models, particularly in the presence of uncertainties. This presentation describes our attempt at developing lattice-based, agent-based and phase-field models of tumor growth and assessing their predictive power through new adaptive algorithms for model selection and model validation embodied in the Occam Plausibility Algorithm (OPAL), which brings together model calibration, determination of sensitivities of outputs to parameter variances, and calculation of model plausibilities for model selection.

  14. A guide to Bayesian model selection for ecologists

    USGS Publications Warehouse

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  15. Computer model for selecting flow measuring structures in open channels

    SciTech Connect

    Hickey, M. J.

    1980-01-01

    Quantifying various pollutants in natural waterways has received increased emphasis with more stringent regulations issued by the Environmental Protection Agency (E.P.A.). Measuring natural stream flows presents a multitude of problems, the most significant of which is the type of structure needed to measure the flows at the desired level of accuracy. A computer model designed to select a structure to best fit the engineer's needs is under development. This model, given the pertinent boundary conditions, will pinpoint the structure most suitable, if one exists. This automated selection process greatly improves on the old trial-and-error approach.

  16. Performance-based selection of likelihood models for phylogeny estimation.

    PubMed

    Minin, Vladimir; Abdo, Zaid; Joyce, Paul; Sullivan, Jack

    2003-10-01

    Phylogenetic estimation has largely come to rely on explicitly model-based methods. This approach requires that a model be chosen and that that choice be justified. To date, justification has largely been accomplished through use of likelihood-ratio tests (LRTs) to assess the relative fit of a nested series of reversible models. While this approach certainly represents an important advance over arbitrary model selection, the best fit of a series of models may not always provide the most reliable phylogenetic estimates for finite real data sets, where all available models are surely incorrect. Here, we develop a novel approach to model selection, which is based on the Bayesian information criterion, but incorporates relative branch-length error as a performance measure in a decision theory (DT) framework. This DT method includes a penalty for overfitting, is applicable prior to running extensive analyses, and simultaneously compares all models being considered and thus does not rely on a series of pairwise comparisons of models to traverse model space. We evaluate this method by examining four real data sets and by using those data sets to define simulation conditions. In the real data sets, the DT method selects the same or simpler models than conventional LRTs. In order to lend generality to the simulations, codon-based models (with parameters estimated from the real data sets) were used to generate simulated data sets, which are therefore more complex than any of the models we evaluate. On average, the DT method selects models that are simpler than those chosen by conventional LRTs. Nevertheless, these simpler models provide estimates of branch lengths that are more accurate both in terms of relative error and absolute error than those derived using the more complex (yet still wrong) models chosen by conventional LRTs. This method is available in a program called DT-ModSel. PMID:14530134

  17. Applying Four Different Risk Models in Local Ore Selection

    SciTech Connect

    Richmond, Andrew

    2002-12-15

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program risksel is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, optimal selections vary with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher cost to lower cost processing options as the aversion to financial risk increases. The stochastic dominance model was usually unable to determine the optimal local selection.
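
    A minimal sketch of the risk-adjusted selection idea, using the negative exponential utility mentioned above; the prices, costs, recoveries, and grade simulations below are invented for illustration and do not reproduce the risksel program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grade realizations at one mining block (e.g., from conditional simulation)
grades = rng.lognormal(mean=0.0, sigma=0.5, size=500)

PRICE = 100.0                                          # all economics here are invented
COST = {"mill": 40.0, "leach": 15.0, "waste": 0.0}
RECOVERY = {"mill": 0.90, "leach": 0.60, "waste": 0.0}

def profit(grade, option):
    return PRICE * RECOVERY[option] * grade - COST[option]

def certainty_equivalent(payoffs, risk_aversion):
    """CE under the negative exponential utility U(x) = -exp(-a*x)."""
    return -np.log(np.mean(np.exp(-risk_aversion * payoffs))) / risk_aversion

for a in (1e-3, 0.05):                 # nearly risk-neutral vs. strongly risk-averse
    best = max(COST, key=lambda opt: certainty_equivalent(profit(grades, opt), a))
    print(f"risk aversion {a}: send block to '{best}'")
```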

  18. Multicriteria framework for selecting a process modelling language

    NASA Astrophysics Data System (ADS)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines for evaluating and comparing languages to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.

  19. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background: Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results: We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions: We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
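
    The core quantity in this design criterion is a divergence between the predictive densities of competing models. The sketch below uses a histogram-based Jensen-Shannon estimate as a simple stand-in for the paper's k-Nearest Neighbor estimator, on synthetic one-dimensional predictive samples.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)

# Predictive samples for one candidate measurement under two competing models
# (in practice these come from posterior-predictive simulation of each model)
pred_model_a = rng.normal(1.0, 0.4, size=5000)
pred_model_b = rng.normal(1.6, 0.5, size=5000)

# Histogram-based JSD, a stand-in for the paper's k-NN estimator (an assumption here)
bins = np.histogram_bin_edges(np.concatenate([pred_model_a, pred_model_b]), bins=50)
p, _ = np.histogram(pred_model_a, bins=bins, density=True)
q, _ = np.histogram(pred_model_b, bins=bins, density=True)
jsd = jensenshannon(p, q) ** 2      # scipy returns the JS *distance* (sqrt of JSD)

print(f"estimated JSD = {jsd:.3f}")  # the experiment maximizing this is most informative
```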

  20. Monthly streamflow prediction in the Volta Basin of West Africa: A SISO NARMAX polynomial modelling

    NASA Astrophysics Data System (ADS)

    Amisigo, B. A.; van de Giesen, N.; Rogers, C.; Andah, W. E. I.; Friesen, J.

    Single-input-single-output (SISO) non-linear system identification techniques were employed to model monthly catchment runoff at selected gauging sites in the Volta Basin of West Africa. NARMAX (Non-linear Autoregressive Moving Average with eXogenous Input) polynomial models were fitted to basin monthly rainfall and gauging station runoff data for each of the selected sites and used to predict monthly runoff at the sites. An error reduction ratio (ERR) algorithm was used to order regressors for various combinations of input, output and noise lags (various model structures), and the significant regressors for each model were selected by applying an Akaike Information Criterion (AIC) to independent rainfall-runoff validation series. Model parameters were estimated with the Matlab REGRESS function (an orthogonal least squares method). In each case, the sub-model without noise terms was fitted first, followed by a fitting of the noise model. The coefficient of determination (R-squared), the Nash-Sutcliffe efficiency criterion (NSE) and the F statistic for the estimation (training) series were used to evaluate the significance of fit of each model to this series, while model selection from the range of models fitted for each gauging site was done by examining the NSEs and the AICs of the validation series. Monthly runoff predictions from the selected models were very good, and the polynomial models appeared to have captured a good part of the rainfall-runoff non-linearity. The results indicate that the NARMAX modelling framework is suitable for monthly river runoff prediction in the Volta Basin. The several good models made available by the NARMAX modelling framework could be useful in the selection of model structures that also provide insights into the physical behaviour of the catchment rainfall-runoff system.
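
    A compact sketch of the AIC-driven structure search described here, using a linear ARX model on synthetic monthly data as a stand-in for the full NARMAX polynomial with ERR regressor ordering. Note that the paper applied AIC to an independent validation series, whereas this sketch uses in-sample AIC for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 240                                  # 20 years of monthly data (synthetic)
rain = rng.gamma(2.0, 50.0, size=T)
flow = np.zeros(T)
for t in range(1, T):                    # synthetic runoff with one month of memory
    flow[t] = 0.6 * flow[t - 1] + 0.3 * rain[t] + rng.normal(0, 5)

def fit_arx(ny, nu):
    """Least-squares ARX(ny, nu) fit; returns AIC assuming Gaussian errors."""
    lag = max(ny, nu)
    cols = [flow[lag - i:T - i] for i in range(1, ny + 1)]      # output lags
    cols += [rain[lag - j:T - j] for j in range(0, nu + 1)]     # input lags
    X = np.column_stack([np.ones(T - lag)] + cols)
    y = flow[lag:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + 2 * k                          # AIC up to a constant

structures = [(ny, nu) for ny in (1, 2, 3) for nu in (0, 1, 2)]
best = min(structures, key=lambda s: fit_arx(*s))
print("selected (output lags, input lags):", best)
```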

  1. Selection of climate change scenario data for impact modelling.

    PubMed

    Sloth Madsen, M; Maule, C Fox; MacKellar, N; Olesen, J E; Christensen, J Hesselbjerg

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale, a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented in this paper, applied to relative humidity, but it could be adapted to other variables if needed.

  2. How many parameters in the cosmological models with dark energy? [rapid communication

    NASA Astrophysics Data System (ADS)

    Godłowski, Włodzimierz; Szydłowski, Marek

    2005-09-01

    In cosmology many dramatically different scenarios in the past (big bang versus bounce) and in the future (de Sitter versus big rip) are compatible with present-day observations. These difficulties are called the degeneracy problem. We use the Akaike (AIC) and Bayesian (BIC) information criteria of model selection to avoid this degeneracy and to determine the model whose parameter set gives the most preferred fit to the data. We consider seven representative scenarios, namely: the ΛCDM model, the CDM model with topological defects, the phantom CDM model, the bouncing ΛCDM model, the bouncing phantom CDM model, the brane ΛCDM model, and a model with the dynamical equation-of-state parameter linearized around the present epoch. Applying the information criteria to the currently available SNIa data we show that the AIC indicates the flat phantom model while the BIC indicates both the flat phantom CDM and flat ΛCDM models. Finally we conclude that the number of essential parameters in dark energy models confronted with the SNIa data is two.
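
    For reference, the two criteria being compared are the standard ones; with $\mathcal{L}_{\max}$ the maximum likelihood, $k$ the number of free parameters, and $N$ the number of data points:

```latex
\mathrm{AIC} = -2\ln\mathcal{L}_{\max} + 2k, \qquad
\mathrm{BIC} = -2\ln\mathcal{L}_{\max} + k\ln N
```

    Because the BIC penalty per parameter, $\ln N$, exceeds the AIC penalty of 2 whenever $N \geq 8$, BIC tends to favor the more parsimonious model on large SNIa samples, which is consistent with the disagreement between the two criteria reported above.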

  3. IT vendor selection model by using structural equation model & analytical hierarchy process

    NASA Astrophysics Data System (ADS)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can degrade an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research develops a new hybrid model for the vendor selection process to support better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The model's five-step framework was designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, what-if analysis will be used for model validation.

  4. Control of active sites in selective flocculation: I -- Mathematical model

    SciTech Connect

    Behl, S.; Moudgil, B.M.; Prakash, T.S. . Dept. of Materials Science and Engineering)

    1993-12-01

    Heteroflocculation has been determined to be another major reason for loss of selectivity in the flocculation process. In a mathematical model developed earlier, conditions for controlling heteroflocculation were discussed. Blocking active sites to control selective adsorption of a flocculant on a desirable solid surface is discussed here. It has been demonstrated that the lower molecular weight fraction of a flocculant, which is incapable of flocculating the particles, is an efficient site-blocking agent. The major application of selective flocculation has been in mineral processing, but many potential uses exist in biological and other colloidal systems. These include purification of ceramic powders, separating hazardous solids from chemical waste, and removal of deleterious components from paper pulp.

  5. Spectral modeling of channel band shapes in wavelength selective switches.

    PubMed

    Pulikkaseril, Cibby; Stewart, Luke A; Roelens, Michaël A F; Baxter, Glenn W; Poole, Simon; Frisken, Steve

    2011-04-25

    A model for characterizing the spectral response of the passband of Wavelength Selective Switches (WSS) is presented. We demonstrate that, in contrast to the commonly used supergaussian model, the presented model offers a more complete match to measured results, as it is based on the physical operation of the optical system. We also demonstrate that this model is better suited for calculation of WSS channel bandwidths, as well as predicting the final bandwidth of cascaded WSS modules. Finally, we show the utility of this model in predicting channel shapes in flexible bandwidth WSS channel plans.

  6. Model Selection in Historical Research Using Approximate Bayesian Computation

    PubMed Central

    Rubio-Campillo, Xavier

    2016-01-01

    Formal Models and History: Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties are based on the complexities of modelling social interaction, and the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. Case Study: This work examines an alternate approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester's laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer parameter values and to perform model selection via Bayes factors. Impact: Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimations and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
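
    Approximate Bayesian Computation with model selection can be sketched in a few lines. The rejection sampler below is a toy with invented simulators and an invented observed statistic; it is not the paper's implementation, but it shows how accepted model indicators yield posterior model probabilities and, under a uniform model prior, a Bayes factor.

```python
import numpy as np

rng = np.random.default_rng(3)

observed = 0.35                          # hypothetical observed summary statistic

def simulate(model):
    """Toy simulators standing in for competing combat models (not the paper's code)."""
    if model == "linear":
        return rng.beta(2, 4)
    return rng.beta(3, 4)                # "squared"

# ABC rejection: draw a model from its prior, simulate, accept if close to the data
accepted = []
for _ in range(100_000):
    m = rng.choice(["linear", "squared"])        # uniform prior over models
    if abs(simulate(m) - observed) < 0.02:       # tolerance epsilon
        accepted.append(str(m))

counts = {m: accepted.count(m) for m in ("linear", "squared")}
print(counts)
# With a uniform model prior, the posterior odds equal the Bayes factor:
print("Bayes factor (squared vs linear) ~", counts["squared"] / counts["linear"])
```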

  7. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating

  8. Bayesian Nonlinear Model Selection for Gene Regulatory Networks

    PubMed Central

    Ni, Yang; Stingo, Francesco C.; Baladandayuthapani, Veerabhadran

    2015-01-01

    Summary: Gene regulatory networks represent the regulatory relationships between genes and their products and are important for exploring and defining the underlying biological processes of cellular systems. We develop a novel framework to recover the structure of nonlinear gene regulatory networks using semiparametric spline-based directed acyclic graphical models. Our use of splines allows the model to have both flexibility in capturing nonlinear dependencies as well as control of overfitting via shrinkage, using mixed model representations of penalized splines. We propose a novel discrete mixture prior on the smoothing parameter of the splines that allows for simultaneous selection of both linear and nonlinear functional relationships as well as inducing sparsity in the edge selection. Using simulation studies, we demonstrate the superior performance of our methods in comparison with several existing approaches in terms of network reconstruction and functional selection. We apply our methods to a gene expression dataset in glioblastoma multiforme, which reveals several interesting and biologically relevant nonlinear relationships. PMID:25854759

  9. Uncertain programming models for portfolio selection with uncertain returns

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment in which security returns cannot be well reflected by historical data, but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model by using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.

  10. Optimization of Parameter Selection for Partial Least Squares Model Development

    NASA Astrophysics Data System (ADS)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize nonsystematic parameters in a quantitative model, i.e., spectral pretreatment, latent factors and variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the Partial Least Squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of standard error of prediction to standard deviation (RPD), and the determination coefficients of calibration (R2cal) and validation (R2pre) were simultaneously assessed to optimize the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there was more than one modeling path that ensures good modeling. The PLS model optimizes modeling parameters step-by-step, but the robust approach described here demonstrates better efficiency than previously published methods.
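
    A minimal sketch of one step of such a processing trajectory: scanning the number of PLS latent factors and comparing the calibration error (RMSEC) against a cross-validated error, used here as a stand-in for the paper's RMSEP on an independent prediction set. The data are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(80, 200))                 # synthetic "spectra"
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=80)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Scan latent factors; report calibration and cross-validated prediction error
for k in range(1, 9):
    pls = PLSRegression(n_components=k)
    y_cal = pls.fit(X, y).predict(X).ravel()
    y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
    print(f"k={k}: RMSEC={rmse(y, y_cal):.3f}  RMSECV={rmse(y, y_cv):.3f}")
# Choose the smallest k beyond which the cross-validated error stops improving.
```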

  11. The E-MS Algorithm: Model Selection with Incomplete Data

    PubMed Central

    Jiang, Jiming; Nguyen, Thuan; Rao, J. Sunil

    2014-01-01

    We propose a procedure associated with the idea of the E-M algorithm for model selection in the presence of missing data. The idea extends the concept of parameters to include both the model and the parameters under the model, and thus allows the model to be part of the E-M iterations. We develop the procedure, known as the E-MS algorithm, under the assumption that the class of candidate models is finite. Some special cases of the procedure are considered, including E-MS with the generalized information criteria (GIC), and E-MS with the adaptive fence (AF; Jiang et al. 2008). We prove numerical convergence of the E-MS algorithm as well as consistency in model selection of the limiting model of the E-MS convergence, for E-MS with GIC and E-MS with AF. We study the impact on model selection of different missing data mechanisms. Furthermore, we carry out extensive simulation studies on the finite-sample performance of the E-MS with comparisons to other procedures. The methodology is also illustrated on a real data analysis involving QTL mapping for an agricultural study on barley grains. PMID:26783375

  12. Usefulness of information criteria for the selection of calibration curves.

    PubMed

    Rozet, E; Ziemons, E; Marini, R D; Hubert, Ph

    2013-07-01

    The reliability of analytical results obtained with quantitative analytical methods is highly dependent on the selection of the adequate model used as the calibration curve. The best known parameter for selecting an adequate response function or model is the coefficient of determination R2. However, it is well known that it suffers from many shortcomings, such as leading to overfitting the data. A proposed solution is to use the adjusted determination coefficient R2adj, which aims at reducing this problem. However, another family of criteria exists for selecting an adequate model: the information criteria AIC, AICc, and BIC. These criteria have rarely been used in analytical chemistry to select the adequate calibration curve. This work aims at assessing the performance of the statistical information criteria, as well as R2 and R2adj, for the selection of an adequate calibration curve. They are applied to several analytical methods covering liquid chromatographic methods as well as electrophoretic ones, involved in the analysis of active substances in biological fluids or aimed at quantifying impurities in drug substances. In addition, Monte Carlo simulations are performed to assess the efficacy of these statistical criteria to select the adequate calibration curve.
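
    The information criteria discussed here are straightforward to compute for candidate calibration curves under Gaussian errors. The sketch below compares polynomial calibration models on a hypothetical concentration-response dataset; AICc matters precisely because calibration sets are typically this small.

```python
import numpy as np

# Hypothetical calibration data: concentration vs. instrument response
x = np.array([0.5, 1, 2, 5, 10, 20, 50, 100], dtype=float)
y = np.array([0.9, 2.1, 4.3, 10.8, 21.5, 44.2, 108.0, 219.0])

n = len(x)
for degree in (1, 2, 3):                  # candidate calibration curves
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    k = degree + 2                        # polynomial coefficients + error variance
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    print(f"degree {degree}: AIC={aic:.1f}  AICc={aicc:.1f}  BIC={bic:.1f}")
```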

  13. Empirical extensions of the lasso penalty to reduce the false discovery rate in high-dimensional Cox regression models.

    PubMed

    Ternès, Nils; Rotolo, Federico; Michiels, Stefan

    2016-07-10

    Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.
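
    The proposed extension is not fully specified in the abstract, but its shape can be sketched: choose λ to maximize a cross-validated log-likelihood minus a parsimony penalty. The toy below uses a Gaussian linear model in place of the paper's Cox regression, and an arbitrary penalty constant; both are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)
n, p = 100, 50
X = rng.normal(size=(n, p))
y = X[:, 0] * 2.0 + rng.normal(size=n)        # one true "biomarker"

def cv_loglik(lam):
    """Cross-validated Gaussian log-likelihood (up to constants) for a given lambda."""
    ll = 0.0
    for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
        model = Lasso(alpha=lam).fit(X[tr], y[tr])
        resid = y[te] - model.predict(X[te])
        ll += -0.5 * np.sum(resid ** 2)       # unit error variance for simplicity
    return ll

def n_selected(lam):
    return int(np.sum(Lasso(alpha=lam).fit(X, y).coef_ != 0))

lambdas = np.logspace(-2, 0.5, 20)
# max-cvl choice vs. a penalized choice: cvl(lambda) minus c * (number of nonzero coefs);
# the constant 2.0 is arbitrary here, not the paper's penalty
max_cvl = max(lambdas, key=cv_loglik)
penalized = max(lambdas, key=lambda lam: cv_loglik(lam) - 2.0 * n_selected(lam))
print(f"max-cvl lambda={max_cvl:.3f} ({n_selected(max_cvl)} selected)")
print(f"penalized lambda={penalized:.3f} ({n_selected(penalized)} selected)")
```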

  14. A model of selective masking in chromatic detection.

    PubMed

    Shepard, Timothy G; Swanson, Emily A; McCarthy, Comfrey L; Eskew, Rhea T

    2016-07-01

    Narrowly tuned, selective noise masking of chromatic detection has been taken as evidence for the existence of a large number of color mechanisms (i.e., higher order color mechanisms). Here we replicate earlier observations of selective masking of tests in the (L,M) plane of cone space when the noise is placed near the corners of the detection contour. We used unipolar Gaussian blob tests with three different noise color directions, and we show that there are substantial asymmetries in the detection contours, asymmetries that would have been missed with bipolar tests such as Gabor patches. We develop a new chromatic detection model, which is based on probability summation of linear cone combinations, and incorporates a linear contrast energy versus noise power relationship that predicts how the sensitivity of these mechanisms changes with noise contrast and chromaticity. With only six unipolar color mechanisms (the same number as the cardinal model), the new model accounts for the threshold contours across the different noise conditions, including the asymmetries and the selective effects of the noises. The key for producing selective noise masking in the (L,M) plane is having more than two mechanisms with opposed L- and M-cone inputs, in which case selective masking can be produced without large numbers of color mechanisms. PMID:27442723

  15. Model selection methodology in supervised learning with evolutionary computation.

    PubMed

    Rowland, J J

    2003-11-01

    The expressive power, powerful search capability, and the explicit nature of the resulting models make evolutionary methods very attractive for supervised learning applications in bioinformatics. However, their characteristics also make them highly susceptible to overtraining or to discovering chance relationships in the data. Identification of appropriate criteria for terminating evolution and for selecting an appropriately validated model is vital. Some approaches that are commonly applied to other modelling methods are not necessarily applicable in a straightforward manner to evolutionary methods. An approach to model selection is presented that is not unduly computationally intensive. To illustrate the issues and the technique two bioinformatic datasets are used, one relating to metabolite determination and the other to disease prediction from gene expression data.

  16. Multi-agent Reinforcement Learning Model for Effective Action Selection

    NASA Astrophysics Data System (ADS)

    Youk, Sang Jo; Lee, Bong Keun

    Reinforcement learning is a subarea of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. In the multi-agent case especially, the state and action spaces become enormous compared to the single-agent case, so an effective action-selection strategy is essential for efficient reinforcement learning. This paper proposes a multi-agent reinforcement learning model based on a fuzzy inference system in order to improve learning speed and select effective actions in multi-agent settings. The paper verifies the effectiveness of the action-selection strategy through evaluation tests based on RoboCup Keepaway, one of the standard test-beds for multi-agent systems. Our proposed model can be applied to evaluate the efficiency of various intelligent multi-agents and also to the strategy and tactics of robot soccer systems.

  17. Model-based sensor location selection for helicopter gearbox monitoring

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Wang, Keming; Danai, Kourosh; Lewicki, David G.

    1996-01-01

    A new methodology is introduced to quantify the significance of accelerometer locations for fault diagnosis of helicopter gearboxes. The basis for this methodology is an influence model which represents the effect of various component faults on accelerometer readings. Based on this model, a set of selection indices are defined to characterize the diagnosability of each component, the coverage of each accelerometer, and the relative redundancy between the accelerometers. The effectiveness of these indices is evaluated experimentally by measurement-fault data obtained from an OH-58A main rotor gearbox. These data are used to obtain a ranking of individual accelerometers according to their significance in diagnosis. Comparison between the experimentally obtained rankings and those obtained from the selection indices indicates that the proposed methodology offers a systematic means for accelerometer location selection.

  18. Evidence accumulation as a model for lexical selection.

    PubMed

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each with varying activations (or signal supports) that largely result from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new real data set, we establish that the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. PMID:26375509
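
    The evidence-accumulation account lends itself to a compact simulation: candidate words race toward a threshold, and the first to reach it is selected. The drift values, threshold, and noise level below are arbitrary illustrative choices, not fitted parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(9)

def lexical_race(drifts, threshold=1.0, dt=0.01, noise=0.3):
    """Race of noisy accumulators: the first candidate to reach threshold is selected."""
    x = np.zeros(len(drifts))
    t = 0.0
    while x.max() < threshold:
        x += np.asarray(drifts) * dt + noise * np.sqrt(dt) * rng.normal(size=len(x))
        x = np.maximum(x, 0.0)           # activations cannot go negative
        t += dt
    return int(np.argmax(x)), t          # selected word index and response time

# Hypothetical activations for a target word vs. two semantic competitors
trials = [lexical_race([1.2, 0.8, 0.6]) for _ in range(2000)]
sel = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print("P(target selected) =", np.mean(sel == 0))
print("mean RT when target wins =", rts[sel == 0].mean())
```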

  19. Second-order model selection in mixture experiments

    SciTech Connect

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.

  20. Research and Development into a Comprehensive Media Selection Model.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.

    1988-01-01

    Describes and discusses an instructional systems media selection model based on training effectiveness and cost effectiveness prediction techniques that were developed to support the U.S. Navy's training programs. Highlights include instructional delivery systems (IDS); decision making; trainee characteristics; training requirements analysis; an…

  1. A computational model of selection by consequences: log survivor plots.

    PubMed

    Kulubekova, Saule; McDowell, J J

    2008-06-01

    [McDowell, J.J., 2004. A computational model of selection by consequences. J. Exp. Anal. Behav. 81, 297-317] instantiated the principle of selection by consequences in a virtual organism with an evolving repertoire of possible behaviors undergoing selection, reproduction, and mutation over many generations. The process is based on the computational approach, which is non-deterministic and rules-based. The model proposes a causal account for operant behavior. McDowell found that the virtual organism consistently showed a hyperbolic relationship between response and reinforcement rates according to the quantitative law of effect. To continue validation of the computational model, the present study examined its behavior on the molecular level by comparing the virtual organism's IRT distributions in the form of log survivor plots to findings from live organisms. Log survivor plots did not show the "broken-stick" feature indicative of distinct bouts and pauses in responding, although the bend in slope of the plots became more defined at low reinforcement rates. The shape of the virtual organism's log survivor plots was more consistent with the data on reinforced responding in pigeons. These results suggest that log survivor plot patterns of the virtual organism were generally consistent with the findings from live organisms, providing further support for the computational model of selection by consequences as a viable account of operant behavior.
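
    A log survivor plot is simple to construct from inter-response times (IRTs). The sketch below builds one from a synthetic two-process IRT mixture; the "broken-stick" signature discussed above corresponds to two distinct linear segments in log-survivor space.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic inter-response times: a mixture of within-bout and between-bout pauses
irts = np.concatenate([rng.exponential(0.5, 800),    # fast within-bout responding
                       rng.exponential(8.0, 200)])   # long pauses between bouts

t = np.sort(irts)
survivor = 1.0 - np.arange(1, len(t) + 1) / len(t)   # empirical P(IRT > t)
log_surv = np.log(survivor[:-1])                     # drop the final zero
tt = t[:-1]                                          # align times with log_surv

# Two clearly different slopes indicate distinct bouts and pauses;
# a single straight line indicates one exponential (Poisson-like) process.
slope_early = np.polyfit(tt[:400], log_surv[:400], 1)[0]
slope_late = np.polyfit(tt[-300:], log_surv[-300:], 1)[0]
print(f"early slope {slope_early:.2f}, late slope {slope_late:.2f}")
```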

  2. Accurate Model Selection of Relaxed Molecular Clocks in Bayesian Phylogenetics

    PubMed Central

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J.; Suchard, Marc A.; Lemey, Philippe

    2013-01-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike’s information criterion through Markov chain Monte Carlo (AICM), in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, stabilized/smoothed HME (sHME), and AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS and that these approaches considerably outperform HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets. PMID:23090976

  3. A model selection approach to analysis of variance and covariance.

    PubMed

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypothesis corresponds to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment by covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both treatment main effect and treatment interaction with a continuous covariate with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partition are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures.

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  5. Noise Level Estimation for Model Selection in Kernel PCA Denoising.

    PubMed

    Varon, Carolina; Alzate, Carlos; Suykens, Johan A K

    2015-11-01

    One of the main challenges in unsupervised learning is to find suitable values for the model parameters. In kernel principal component analysis (kPCA), for example, these are the number of components, the kernel, and its parameters. This paper presents a model selection criterion based on distance distributions (MDDs). This criterion can be used to find the number of components and the σ2 parameter of radial basis function kernels by means of spectral comparison between information and noise. The noise content is estimated from the statistical moments of the distribution of distances in the original dataset. This allows for a type of randomization of the dataset, without actually having to permute the data points or generate artificial datasets. After comparing the eigenvalues computed from the estimated noise with the ones from the input dataset, information is retained and maximized by a set of model parameters. In addition to the model selection criterion, this paper proposes a modification to the fixed-size method and uses the incomplete Cholesky factorization, both of which are used to solve kPCA in large-scale applications. These two approaches, together with the model selection MDD, were tested in toy examples and real life applications, and it is shown that they outperform other known algorithms. PMID:25608316

  6. Model selection as a science driver for dark energy surveys

    NASA Astrophysics Data System (ADS)

    Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin

    2006-07-01

    A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.

  7. Regression with Empirical Variable Selection: Description of a New Method and Application to Ecological Datasets

    PubMed Central

    Goodenough, Anne E.; Hart, Adam G.; Stafford, Richard

    2012-01-01

    Despite recent papers on problems associated with full-model and stepwise regression, their use is still common throughout ecological and environmental disciplines. Alternative approaches, including generating multiple models and comparing them post-hoc using techniques such as Akaike's Information Criterion (AIC), are becoming more popular. However, these are problematic when there are numerous independent variables and interpretation is often difficult when competing models contain many different variables and combinations of variables. Here, we detail a new approach, REVS (Regression with Empirical Variable Selection), which uses all-subsets regression to quantify empirical support for every independent variable. A series of models is created; the first containing the variable with most empirical support, the second containing the first variable and the next most-supported, and so on. The comparatively small number of resultant models (n = the number of predictor variables) means that post-hoc comparison is comparatively quick and easy. When tested on a real dataset – habitat and offspring quality in the great tit (Parus major) – the optimal REVS model explained more variance (higher R2), was more parsimonious (lower AIC), and had greater significance (lower P values), than full, stepwise or all-subsets models; it also had higher predictive accuracy based on split-sample validation. Testing REVS on ten further datasets suggested that this is typical, with R2 values being higher than full or stepwise models (mean improvement = 31% and 7%, respectively). Results are ecologically intuitive as even when there are several competing models, they share a set of “core” variables and differ only in presence/absence of one or two additional variables. We conclude that REVS is useful for analysing complex datasets, including those in ecology and environmental disciplines. PMID:22479605
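
    The REVS idea can be sketched with all-subsets regression. Here each variable's empirical support is measured by the summed Akaike weight of the subsets containing it, which is one natural choice but an assumption on our part, since the abstract does not specify the support measure; the data are synthetic.

```python
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, names = 120, ["habitat", "food", "edge", "noise"]
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)   # two true predictors

def aic_of(subset):
    """AIC of an OLS fit on the given subset of predictors (Gaussian errors)."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + 2 * Z.shape[1]

# All-subsets regression and Akaike weights over the 16 candidate models
subsets = [s for r in range(5) for s in itertools.combinations(range(4), r)]
aics = np.array([aic_of(s) for s in subsets])
w = np.exp(-0.5 * (aics - aics.min()))
w /= w.sum()

# Empirical support per variable: summed weight of all subsets containing it
support = {names[j]: sum(wi for s, wi in zip(subsets, w) if j in s) for j in range(4)}
order = sorted(support, key=support.get, reverse=True)
print("support:", {k: round(v, 3) for k, v in support.items()})
print("REVS-style nested model series:", [order[:i + 1] for i in range(4)])
```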

  8. A model-based approach to selection of tag SNPs

    PubMed Central

    Nicolas, Pierre; Sun, Fengzhu; Li, Lei M

    2006-01-01

    Background Single Nucleotide Polymorphisms (SNPs) are the most common type of polymorphisms found in the human genome. Effective genetic association studies require the identification of sets of tag SNPs that capture as much haplotype information as possible. Tag SNP selection is analogous to the problem of data compression in information theory. According to Shannon's framework, the optimal tag set maximizes the entropy of the tag SNPs subject to constraints on the number of SNPs. This approach requires an appropriate probabilistic model. Compared to simple measures of Linkage Disequilibrium (LD), a good model of haplotype sequences can more accurately account for LD structure. It also provides a machinery for the prediction of tagged SNPs and thereby to assess the performances of tag sets through their ability to predict larger SNP sets. Results Here, we compute the description code-lengths of SNP data for an array of models and we develop tag SNP selection methods based on these models and the strategy of entropy maximization. Using data sets from the HapMap and ENCODE projects, we show that the hidden Markov model introduced by Li and Stephens outperforms the other models in several aspects: description code-length of SNP data, information content of tag sets, and prediction of tagged SNPs. This is the first use of this model in the context of tag SNP selection. Conclusion Our study provides strong evidence that the tag sets selected by our best method, based on Li and Stephens model, outperform those chosen by several existing methods. The results also suggest that information content evaluated with a good model is more sensitive for assessing the quality of a tagging set than the correct prediction rate of tagged SNPs. Besides, we show that haplotype phase uncertainty has an almost negligible impact on the ability of good tag sets to predict tagged SNPs. This justifies the selection of tag SNPs on the basis of haplotype informativeness, although genotyping

  9. Broken selection rule in the quantum Rabi model.

    PubMed

    Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E

    2016-01-01

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models. PMID:27273346

  13. Models of cultural niche construction with selection and assortative mating.

    PubMed

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  14. Categorical variables with many categories are preferentially selected in bootstrap-based model selection procedures for multivariable regression models.

    PubMed

    Rospleszcz, Susanne; Janitza, Silke; Boulesteix, Anne-Laure

    2016-05-01

    Automated variable selection procedures, such as backward elimination, are commonly employed to perform model selection in the context of multivariable regression. The stability of such procedures can be investigated using a bootstrap-based approach. The idea is to apply the variable selection procedure on a large number of bootstrap samples successively and to examine the obtained models, for instance, in terms of the inclusion of specific predictor variables. In this paper, we aim to investigate a particularly important problem affecting this method in the case of categorical predictor variables with different numbers of categories and to give recommendations on how to avoid it. For this purpose, we systematically assess the behavior of automated variable selection based on the likelihood ratio test using either bootstrap samples drawn with replacement or subsamples drawn without replacement from the original dataset. Our study consists of extensive simulations and a real data example from the NHANES study. Our main result is that if automated variable selection is conducted on bootstrap samples, variables with more categories are substantially favored over variables with fewer categories and over metric variables even if none of them have any effect. Importantly, variables with no effect and many categories may be (wrongly) preferred to variables with an effect but few categories. We suggest the use of subsamples instead of bootstrap samples to bypass these drawbacks.
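
    A minimal simulation sketch of the effect described above, under simplifying assumptions: a linear model, a single likelihood-ratio test per resample instead of full backward elimination, and made-up sample sizes. Exact numbers will vary with the seed.

```python
# Sketch: how often a null 10-category factor is "selected" on bootstrap
# samples vs. subsamples. Illustrative assumptions throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "y": rng.normal(size=n),                         # pure-noise outcome
    "cat": rng.integers(0, 10, size=n).astype(str),  # 10 categories, no effect
})

def lr_selects(d, alpha=0.05):
    """True if the null categorical factor survives a likelihood-ratio test."""
    full = smf.ols("y ~ C(cat)", data=d).fit()
    null = smf.ols("y ~ 1", data=d).fit()
    lr = 2 * (full.llf - null.llf)
    return chi2.sf(lr, full.df_model - null.df_model) < alpha

boot = np.mean([lr_selects(df.sample(n, replace=True, random_state=i))
                for i in range(200)])
sub = np.mean([lr_selects(df.sample(int(0.632 * n), replace=False, random_state=i))
               for i in range(200)])
print(f"inclusion frequency, bootstrap: {boot:.2f} vs. subsample: {sub:.2f}")
```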

  15. Selection of Models for Ingestion Pathway and Relocation

    SciTech Connect

    Blanchard, A.; Thompson, J.M.

    1998-11-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways.

  16. Stationary solutions for metapopulation Moran models with mutation and selection.

    PubMed

    Constable, George W A; McKane, Alan J

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation. PMID:25871148

  17. Stationary solutions for metapopulation Moran models with mutation and selection

    NASA Astrophysics Data System (ADS)

    Constable, George W. A.; McKane, Alan J.

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.

  18. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    SciTech Connect

    Blanchard, A.

    1998-12-17

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities.

  19. Stationary solutions for metapopulation Moran models with mutation and selection.

    PubMed

    Constable, George W A; McKane, Alan J

    2015-03-01

    We construct an individual-based metapopulation model of population genetics featuring migration, mutation, selection, and genetic drift. In the case of a single "island," the model reduces to the Moran model. Using the diffusion approximation and time-scale separation arguments, an effective one-variable description of the model is developed. The effective description bears similarities to the well-mixed Moran model with effective parameters that depend on the network structure and island sizes, and it is amenable to analysis. Predictions from the reduced theory match the results from stochastic simulations across a range of parameters. The nature of the fast-variable elimination technique we adopt is further studied by applying it to a linear system, where it provides a precise description of the slow dynamics in the limit of large time-scale separation.
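
    For readers unfamiliar with the base case, the following is a minimal sketch (not the authors' implementation) of a single-island Moran step with selection and mutation, the process the metapopulation model above reduces to; parameter values are illustrative.

```python
# Sketch: one birth-death event of a two-allele Moran model with selection
# and symmetric mutation. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def moran_step(n_A, N, s=0.02, u=0.001):
    """Allele A has selective advantage s; mutation occurs at birth with rate u."""
    x = n_A / N
    p_birth_A = (1 + s) * x / ((1 + s) * x + (1 - x))  # selection acts on births
    birth_is_A = rng.random() < p_birth_A
    if rng.random() < u:                               # symmetric mutation at birth
        birth_is_A = not birth_is_A
    death_is_A = rng.random() < x                      # death is uniform
    return n_A + birth_is_A - death_is_A

N, n_A = 100, 50
for _ in range(20_000):
    n_A = moran_step(n_A, N)
print(f"frequency of A after 20000 events: {n_A / N:.2f}")
```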

  20. Selection of Models for Ingestion Pathway and Relocation

    SciTech Connect

    Blanchard, A.; Thompson, J.M.

    1999-02-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways.

  1. Modeling selective pressures on phytoplankton in the global ocean.

    PubMed

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-01-01

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces (physical, biogeochemical, ecological, and mutational) into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and…

  2. Selection between Linear Factor Models and Latent Profile Models Using Conditional Covariances

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Maraun, Michael D.

    2010-01-01

    A method for selecting between K-dimensional linear factor models and (K + 1)-class latent profile models is proposed. In particular, it is shown that the conditional covariances of observed variables are constant under factor models but nonlinear functions of the conditioning variable under latent profile models. The performance of a convenient…
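
    The diagnostic in this record can be illustrated with a small simulation. The sketch below is my construction, with crude quantile binning standing in for the paper's conditional-covariance machinery: under a one-factor model the covariance of two indicators within bins of a third stays roughly stable, while under a two-class profile model it varies strongly with the conditioning value.

```python
# Sketch: conditional covariances under a factor model vs. a latent profile
# model, on simulated data. Binning and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000

def cond_cov(x1, x2, z, nbins=5):
    """Covariance of x1 and x2 within quantile bins of the conditioning variable z."""
    edges = np.quantile(z, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.digitize(z, edges[1:-1]), 0, nbins - 1)
    return [round(float(np.cov(x1[idx == b], x2[idx == b])[0, 1]), 3)
            for b in range(nbins)]

f = rng.normal(size=n)                                # continuous latent factor
xf = f[:, None] + rng.normal(scale=0.7, size=(n, 3))
c = 2.0 * rng.integers(0, 2, size=n)                  # two latent classes
xc = c[:, None] + rng.normal(scale=0.7, size=(n, 3))

print("factor model :", cond_cov(xf[:, 0], xf[:, 1], xf[:, 2]))  # roughly stable
print("profile model:", cond_cov(xc[:, 0], xc[:, 1], xc[:, 2]))  # peaks mid-range
```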

  3. Visual analytics for model selection in time series analysis.

    PubMed

    Bögl, Markus; Aigner, Wolfgang; Filzmoser, Peter; Lammarsch, Tim; Miksch, Silvia; Rind, Alexander

    2013-12-01

    Model selection in time series analysis is a challenging task for domain experts in many application areas such as epidemiology, economy, or environmental sciences. The methodology used for this task demands a close combination of human judgement and automated computation. However, statistical software tools do not adequately support this combination through interactive visual interfaces. We propose a Visual Analytics process to guide domain experts in this task. For this purpose, we developed the TiMoVA prototype that implements this process based on user stories and iterative expert feedback on user experience. The prototype was evaluated by usage scenarios with an example dataset from epidemiology and interviews with two external domain experts in statistics. The insights from the experts' feedback and the usage scenarios show that TiMoVA is able to support domain experts in model selection tasks through interactive visual interfaces with short feedback cycles.

  4. Modeling Selective Elimination of Quiescent Cancer Cells from Bone Marrow

    PubMed Central

    Cavnar, Stephen P.; Rickelmann, Andrew D.; Meguiar, Kaille F.; Xiao, Annie; Dosch, Joseph; Leung, Brendan M.; Cai Lesher-Perez, Sasha; Chitta, Shashank; Luker, Kathryn E.; Takayama, Shuichi; Luker, Gary D.

    2015-01-01

    Patients with many types of malignancy commonly harbor quiescent disseminated tumor cells in bone marrow. These cells frequently resist chemotherapy and may persist for years before proliferating as recurrent metastases. To test for compounds that eliminate quiescent cancer cells, we established a new 384-well 3D spheroid model in which small numbers of cancer cells reversibly arrest in G1/G0 phase of the cell cycle when cultured with bone marrow stromal cells. Using dual-color bioluminescence imaging to selectively quantify viability of cancer and stromal cells in the same spheroid, we identified single compounds and combination treatments that preferentially eliminated quiescent breast cancer cells but not stromal cells. A treatment combination effective against malignant cells in spheroids also eliminated breast cancer cells from bone marrow in a mouse xenograft model. This research establishes a novel screening platform for therapies that selectively target quiescent tumor cells, facilitating identification of new drugs to prevent recurrent cancer. PMID:26408255

  5. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  6. Bayesian model selection applied to artificial neural networks used for water resources modeling

    NASA Astrophysics Data System (ADS)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
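
    As a toy illustration of evidence-based comparison of ANN complexities, here is a minimal sketch using a crude prior-sampling Monte Carlo estimator, a noisy stand-in for the MCMC estimators compared in the paper; the network sizes, priors, and data are all assumptions.

```python
# Sketch: log evidence of tiny tanh networks of increasing size, estimated by
# averaging the likelihood over prior draws. Illustrative and high-variance;
# real implementations use MCMC-based estimators.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-3, 3, 40)
y = np.tanh(1.5 * x) + rng.normal(0, 0.1, x.size)      # synthetic observations

def log_lik(pred, sigma=0.1):
    """Gaussian log-likelihood of the data given network predictions."""
    return (-0.5 * np.sum(((y - pred) / sigma) ** 2)
            - y.size * np.log(sigma * np.sqrt(2 * np.pi)))

def log_evidence(hidden, draws=20_000):
    """log E_prior[likelihood] for a 1-input, `hidden`-node tanh network."""
    logs = np.empty(draws)
    for i in range(draws):
        w1, b1 = rng.normal(0, 1, hidden), rng.normal(0, 1, hidden)
        w2 = rng.normal(0, 1, hidden)
        logs[i] = log_lik(np.tanh(np.outer(x, w1) + b1) @ w2)
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))       # stable log-mean-exp

for h in (1, 3, 8):
    print(f"hidden nodes = {h}: log evidence ~ {log_evidence(h):.1f}")
```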

  7. The hierarchical sparse selection model of visual crowding.

    PubMed

    Chaney, Wesley; Fischer, Jason; Whitney, David

    2014-01-01

    Because the environment is cluttered, objects rarely appear in isolation. The visual system must therefore attentionally select behaviorally relevant objects from among many irrelevant ones. A limit on our ability to select individual objects is revealed by the phenomenon of visual crowding: an object seen in the periphery, easily recognized in isolation, can become impossible to identify when surrounded by other, similar objects. The neural basis of crowding is hotly debated: while prevailing theories hold that crowded information is irrecoverable, destroyed due to over-integration in early-stage visual processing, recent evidence demonstrates otherwise. Crowding can occur between high-level, configural object representations, and crowded objects can contribute with high precision to judgments about the "gist" of a group of objects, even when they are individually unrecognizable. While existing models can account for the basic diagnostic criteria of crowding (e.g., specific critical spacing, spatial anisotropies, and temporal tuning), no present model explains how crowding can operate simultaneously at multiple levels in the visual processing hierarchy, including at the level of whole objects. Here, we present a new model of visual crowding, the hierarchical sparse selection (HSS) model, which accounts for object-level crowding, as well as a number of puzzling findings in the recent literature. Counter to existing theories, we posit that crowding occurs not due to degraded visual representations in the brain, but due to impoverished sampling of visual representations for the sake of perception. The HSS model unifies findings from a disparate array of visual crowding studies and makes testable predictions about how information in crowded scenes can be accessed.

  8. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    PubMed

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments; with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.

  9. Bioeconomic model and selection indices in Aberdeen Angus cattle.

    PubMed

    Campos, G S; Braccini Neto, J; Oaigen, R P; Cardoso, F F; Cobuci, J A; Kern, E L; Campos, L T; Bertoli, C D; McManus, C M

    2014-08-01

    A bioeconomic model was developed to calculate economic values for biological traits in full-cycle production systems and propose selection indices based on selection criteria used in the Brazilian Aberdeen Angus genetic breeding programme (PROMEBO). To assess the impact of changes in the performance of the traits on the profit of the production system, the initial values of the traits were increased by 1%. The economic values for number of calves weaned (NCW) and slaughter weight (SW) were, respectively, R$ 6.65 and R$ 1.43/cow/year. The selection index at weaning showed a 44.77% emphasis on body weight, 14.24% for conformation, 30.36% for early maturing and 10.63% for muscle development. The eighteen-month index showed emphasis of 77.61% for body weight, 4.99% for conformation, 11.09% for early maturing, 6.10% for muscle development and 0.22% for scrotal circumference. NCW showed the highest economic impact, and SW had an important positive effect on the economics of the production system. The selection index proposed can be used by breeders and should contribute to greater profitability.

  10. On Model Specification and Selection of the Cox Proportional Hazards Model*

    PubMed Central

    Lin, Chen-Yen; Halabi, Susan

    2013-01-01

    Prognosis plays a pivotal role in patient management and trial design. A useful prognostic model should correctly identify important risk factors and estimate their effects. In this article, we discuss several challenges in selecting prognostic factors and estimating their effects using the Cox proportional hazards model. Although it has a flexible semiparametric form, the Cox model is not entirely exempt from model misspecification. To minimize possible misspecification, instead of imposing the traditional linearity assumption, flexible modeling techniques have been proposed to accommodate nonlinear effects. We first review several existing nonparametric estimation and selection procedures and then present a numerical study to compare the performance of parametric and nonparametric procedures. We demonstrate the impact of model misspecification on variable selection and model prediction using a simulation study and an example from a phase III trial in prostate cancer. PMID:23784939
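
    For flavor, here is a minimal sketch of penalized Cox regression as a variable-selection device, using the lifelines package and its bundled Rossi dataset rather than the trial data discussed above; the penalty value is an arbitrary assumption.

```python
# Sketch: L1-penalized Cox regression shrinks weak coefficients toward zero,
# acting as a crude variable selector. Penalty strength is illustrative.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()                                # recidivism data shipped with lifelines
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 (lasso-type) penalty
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.params_.round(3))                      # near-zero entries are effectively dropped
```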

  11. UQ-Guided Selection of Physical Parameterizations in Climate Models

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Debusschere, B.; Ghan, S.; Rosa, D.; Bulaevskaya, V.; Anderson, G. J.; Chowdhary, K.; Qian, Y.; Lin, G.; Larson, V. E.; Zhang, G. J.; Randall, D. A.

    2015-12-01

    Given two or more parameterizations that represent the same physical process in a climate model, scientists are sometimes faced with difficult decisions about which scheme to choose for their simulations and analysis. These decisions are often based on subjective criteria, such as "which scheme is easier to use, is computationally less expensive, or produces results that look better?" Uncertainty quantification (UQ) and model selection methods can be used to objectively rank the performance of different physical parameterizations by increasing the preference for schemes that fit observational data better, while at the same time penalizing schemes that are overly complex or have excessive degrees-of-freedom. Following these principles, we are developing a perturbed-parameter UQ framework to assist in the selection of parameterizations for a climate model. Preliminary results will be presented on the application of the framework to assess the performance of two alternate schemes for simulating tropical deep convection (CLUBB-SILHS and ZM-trigmem) in the U.S. Dept. of Energy's ACME climate model. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, is supported by the DOE Office of Science through the Scientific Discovery Through Advanced Computing (SciDAC), and is released as LLNL-ABS-675799.

  12. Selection of Representative Models for Decision Analysis Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology for identifying representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  13. Model selection and inference for censored lifetime medical expenditures.

    PubMed

    Johnson, Brent A; Long, Qi; Huang, Yijian; Chansky, Kari; Redman, Mary

    2016-09-01

    Identifying factors associated with increased medical cost is important for many micro- and macro-institutions, including the national economy and public health, insurers, and the insured. However, assembling comprehensive national databases that include both the cost and individual-level predictors can prove challenging. Alternatively, one can use data from smaller studies with the understanding that conclusions drawn from such analyses may be limited to the participant population. At the same time, smaller clinical studies have limited follow-up, and lifetime medical cost may not be fully observed for all study participants. In this context, we develop new model selection methods and inference procedures for secondary analyses of clinical trial data when lifetime medical cost is subject to induced censoring. Our model selection methods extend the theory of penalized estimating functions to a calibration regression estimator tailored for this data type. Next, we develop a novel inference procedure for the unpenalized regression estimator using perturbation and resampling theory. Then, we extend this resampling plan to accommodate regularized coefficient estimation of censored lifetime medical cost and develop postselection inference procedures for the final model. Our methods are motivated by data from Southwest Oncology Group Protocol 9509, a clinical trial of patients with advanced non-small-cell lung cancer, and our models of lifetime medical cost are specific to this population. But the methods presented in this article are built on rather general techniques and could be applied to larger databases as those data become available. PMID:26689300

  14. Selecting global climate models for regional climate change studies

    PubMed Central

    Pierce, David W.; Barnett, Tim P.; Santer, Benjamin D.; Gleckler, Peter J.

    2009-01-01

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures. PMID:19439652

  15. Selecting global climate models for regional climate change studies.

    PubMed

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.
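
    The error-cancellation argument for the multimodel mean is easy to demonstrate. A minimal sketch with synthetic "models" whose biases are independent (an idealizing assumption):

```python
# Sketch: the multimodel mean beats every individual model when model errors
# are independent and partially cancel in the average. Numbers illustrative.
import numpy as np

rng = np.random.default_rng(6)
truth = np.sin(np.linspace(0, 2 * np.pi, 120))               # stand-in climate field
models = truth + rng.normal(0, 0.5, size=(21, truth.size))   # 21 biased "models"

rmse = lambda e: np.sqrt(np.mean(e ** 2))
individual = [rmse(m - truth) for m in models]
print(f"best single model RMSE : {min(individual):.3f}")
print(f"multimodel mean RMSE   : {rmse(models.mean(axis=0) - truth):.3f}")
```

    With independent errors the ensemble-mean RMSE scales roughly as 1/sqrt(21) of a typical single-model RMSE, which is the cancellation effect the abstract describes.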

  16. Technology selection using multiattribute model for rice production in Indonesia

    SciTech Connect

    Abdullah, K.; Irwanto, A.K.

    1996-12-31

    The multiattribute model (MA-Model) is a method to select an appropriate technology from several alternatives based on the maximum utility expected from the application of the selected technology. One of the current agricultural problems faced by Indonesia, as well as by other developing countries, is the choice of a technology that can ensure the sustainability of agricultural development. Environmental degradation and harmful effects on human beings can be reduced to a minimum level while still gaining benefits from the agricultural business. The current study focused on an attempt to determine the optimum technology which should be applied in the future to maintain food self-sufficiency in Indonesia. Basic data for the analysis were collected from the Central Bureau of Statistics and from surveys conducted in the Lampung area of South Sumatera and the rice production area of West Java. The criteria for technology selection were based on: (1) economic gains for the farmer, based on return per ha and employment opportunity; (2) CO2 emission and the environmental effect due to the application of fertilizer and pesticides; (3) energy efficiency in terms of the O/I ratio; and (4) productivity in terms of working hours needed per ha.
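
    A minimal sketch of the weighted-additive utility calculation such a multiattribute model performs; the weights, alternatives, and scores below are hypothetical, loosely echoing the four criteria listed above.

```python
# Sketch: weighted-additive multiattribute utility over hypothetical
# technology alternatives. All weights and scores are made up.
import numpy as np

criteria = ["economic gain", "CO2/environmental impact",
            "energy O/I ratio", "labor productivity"]
weights = np.array([0.35, 0.25, 0.25, 0.15])   # assumed importance weights, sum to 1
# rows = candidate technologies; columns = scores normalized to [0, 1]
scores = np.array([
    [0.8, 0.4, 0.7, 0.6],    # mechanized, high-input
    [0.6, 0.7, 0.6, 0.7],    # intermediate
    [0.4, 0.9, 0.5, 0.8],    # low-input, labor-intensive
])
utility = scores @ weights
print("utilities:", utility.round(3), "-> select technology", int(utility.argmax()) + 1)
```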

  17. [Applying multi-model inference to estimate growth parameters of greater lizard fish Saurida tumbil in Beibu Gulf, South China Sea].

    PubMed

    Hou, Gang; Liu, Jin-Dian; Feng, Bo; Yan, Yun-Rong; Lu, Huo-Sheng

    2014-03-01

    Age and growth parameters are key parameters in fish stock assessment and management strategies, so it is crucial to choose an appropriate growth model for a target species. In this study, five growth models were fitted to the length-age data of the greater lizard fish Saurida tumbil (n = 2046) collected monthly from December 2006 to July 2009 in the Beibu Gulf, South China Sea. The parameters for each model were estimated using the maximum likelihood method under the assumption of an additive error structure. The adjusted coefficient of determination (R2adj), root mean squared error (RMSE), Akaike's information criterion (AIC), and Bayesian information criterion (BIC) were calculated for each model for fitness selection. The results indicated that the four statistical approaches were consistent in selecting the best growth model. The multi-model inference (MMI) approach strongly supported the generalized VBGF, which made up 95.9% of the AIC weight, indicating that this function fitted the length-age data of the greater lizard fish well. The fitted growth function was L_t = 578.49[1 − e^(−0.05(t − 0.14))]^0.361.
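
    A minimal sketch of the underlying procedure: fit a generalized von Bertalanffy growth function by least squares on synthetic length-age data (seeded with the parameter values reported above) and compute a Gaussian-likelihood AIC. This is an illustration, not the study's code.

```python
# Sketch: fit the generalized VBGF to synthetic data and score it with AIC.
import numpy as np
from scipy.optimize import curve_fit

def gvbgf(t, Linf, K, t0, b):
    """Generalized von Bertalanffy growth function."""
    return Linf * (1.0 - np.exp(-K * (t - t0))) ** b

rng = np.random.default_rng(0)
t = rng.uniform(0.5, 8.0, 200)                           # ages in years (synthetic)
L = gvbgf(t, 578.49, 0.05, 0.14, 0.361) + rng.normal(0, 10, t.size)

popt, _ = curve_fit(gvbgf, t, L, p0=[600, 0.1, 0.0, 0.5],
                    bounds=([100, 0.001, -1.0, 0.05], [2000, 2.0, 0.45, 2.0]))
rss = np.sum((L - gvbgf(t, *popt)) ** 2)
n, k = t.size, len(popt) + 1                             # +1 for the error variance
aic = n * np.log(rss / n) + 2 * k                        # Gaussian-likelihood AIC
print("estimates:", popt.round(3), "| AIC:", round(float(aic), 1))
```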

  18. Performance of soil particle-size distribution models for describing deposited soils adjacent to constructed dams in the China Loess Plateau

    NASA Astrophysics Data System (ADS)

    Zhao, Pei; Shao, Ming-an; Horton, Robert

    2011-02-01

    Soil particle-size distributions (PSD) have been used to estimate soil hydraulic properties. Various parametric PSD models have been proposed to describe the soil PSD from sparse experimental data. It is important to determine which PSD model best represents specific soils. Fourteen PSD models were examined in order to determine the best model for representing the deposited soils adjacent to dams in the China Loess Plateau; these were the Skaggs (S-1, S-2, and S-3), fractal (FR), Jaky (J), Lima and Silva (LS), Morgan (M), Gompertz (G), logarithm (L), exponential (E), log-exponential (LE), Weibull (W), van Genuchten type (VG), and Fredlund (F) models. Four hundred and eighty samples were obtained from soils deposited in the Liudaogou catchment. The coefficient of determination (R2), Akaike's information criterion (AIC), and the modified AIC (mAIC) were used. Based upon R2 and AIC, the three- and four-parameter models were both good at describing the PSDs of deposited soils, and the LE, FR, and E models were the poorest. However, the mAIC, used in conjunction with R2 and AIC, indicated that the W model was optimal for describing the PSD of the deposited soils because it penalizes models with more parameters. This analysis is also helpful for identifying which model is best in general. Our results are applicable to the China Loess Plateau.
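
    Both this record and the previous one rank candidate models with AIC. A minimal sketch of the Akaike-weight computation itself, with made-up AIC values for a few of the PSD models named above:

```python
# Sketch: converting AIC values into Akaike weights. The AIC numbers here
# are hypothetical, purely for illustration.
import numpy as np

models = ["Weibull", "Fredlund", "van Genuchten", "fractal"]
aic = np.array([412.3, 414.1, 415.0, 436.8])        # hypothetical AICs
delta = aic - aic.min()                             # AIC differences
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
for name, wi in zip(models, w):
    print(f"{name:>13}: weight = {wi:.3f}")
```

    A model carrying, say, 95.9% of the weight, as the generalized VBGF does in the preceding record, dominates the candidate set in exactly this sense.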

  19. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
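
    For context, here is a minimal sketch of the kind of percentile rule often used to shortlist endmember pixels from NDVI and land-surface-temperature rasters (a common heuristic, not the machine-learning method of this paper); the synthetic rasters and thresholds are assumptions.

```python
# Sketch: percentile-based shortlisting of hot/cold candidate pixels for
# SEBAL/METRIC-style calibration. Synthetic data, illustrative thresholds.
import numpy as np

rng = np.random.default_rng(7)
ndvi = rng.uniform(0.05, 0.9, (200, 200))                 # fake vegetation index
lst = 320 - 25 * ndvi + rng.normal(0, 1.5, ndvi.shape)    # cooler where greener (K)

cold = (ndvi > np.quantile(ndvi, 0.95)) & (lst < np.quantile(lst, 0.05))
hot = (ndvi < np.quantile(ndvi, 0.05)) & (lst > np.quantile(lst, 0.95))
print("cold candidates:", int(cold.sum()), "| hot candidates:", int(hot.sum()))
```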

  20. Variable selection for semiparametric mixed models in longitudinal studies.

    PubMed

    Ni, Xiao; Zhang, Daowen; Zhang, Hao Helen

    2010-03-01

    We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: the roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on linear coefficients to achieve model sparsity. Compared to existing estimation equation based approaches, our procedure provides valid inference for data with missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimation for estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to the existing ones. We then apply the new method to a real data set from a lactation study.

  1. Development of solar drying model for selected Cambodian fish species.

    PubMed

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the promising techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and drying air relative humidity were 55.6 °C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg · h(-1). Based on the coefficient of determination (R(2)), chi-square (χ(2)) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model for climbing perch and Nile tilapia, the Diffusion approximate model for swamp eel and walking catfish, and the Two-term model for Channa fish. In the case of electric oven drying, the Modified Page 1 model showed the best results for all investigated fish species except Channa fish, for which the Two-term model was best. Sensory evaluation showed that the most preferred fish was climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.

  2. Development of solar drying model for selected Cambodian fish species.

    PubMed

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the promising techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and drying air relative humidity were 55.6 °C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg · h(-1). Based on the coefficient of determination (R(2)), chi-square (χ(2)) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model for climbing perch and Nile tilapia, the Diffusion approximate model for swamp eel and walking catfish, and the Two-term model for Channa fish. In the case of electric oven drying, the Modified Page 1 model showed the best results for all investigated fish species except Channa fish, for which the Two-term model was best. Sensory evaluation showed that the most preferred fish was climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381

  3. Development of Solar Drying Model for Selected Cambodian Fish Species

    PubMed Central

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the promising techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. The mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model for climbing perch and Nile tilapia, the Diffusion approximate model for swamp eel and walking catfish, and the Two-term model for Channa fish. In the case of electric oven drying, the Modified Page 1 model showed the best results for all investigated fish species except Channa fish, for which the Two-term model was best. Sensory evaluation showed that the most preferred fish was climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381
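
    A minimal sketch of the curve-fitting behind such comparisons: fitting the Modified Page model MR = exp(−(kt)^n) to a synthetic drying curve and scoring it with R² and RMSE, as in the study; the data and starting values are made up.

```python
# Sketch: thin-layer drying-kinetics fit with goodness-of-fit statistics.
import numpy as np
from scipy.optimize import curve_fit

def modified_page(t, k, n):
    """Modified Page thin-layer drying model: MR = exp(-(k t)^n)."""
    return np.exp(-((k * t) ** n))

t = np.linspace(0, 10, 25)                               # drying time, hours
mr = modified_page(t, 0.35, 1.2) + np.random.default_rng(8).normal(0, 0.01, t.size)

(k, n), _ = curve_fit(modified_page, t, mr, p0=[0.1, 1.0],
                      bounds=([1e-6, 0.1], [5.0, 5.0]))
resid = mr - modified_page(t, k, n)
r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
rmse = np.sqrt(np.mean(resid ** 2))
print(f"k = {k:.3f}, n = {n:.3f}, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")
```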

  4. Selection Strategies for Social Influence in the Threshold Model

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for the selection of large fractions of initiators in a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank, etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model-specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
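
    A minimal sketch of a threshold cascade with the degree-rank initiator strategy mentioned above, on a random graph; the graph, threshold, and initiator count are all illustrative assumptions.

```python
# Sketch: threshold-model cascade seeded by the highest-degree nodes.
import networkx as nx

def threshold_cascade(G, initiators, theta=0.4):
    """Activate nodes whose active-neighbor fraction reaches theta; iterate to a fixed point."""
    active, changed = set(initiators), True
    while changed:
        changed = False
        for v in G:
            if v not in active:
                nbrs = list(G[v])
                if nbrs and sum(u in active for u in nbrs) / len(nbrs) >= theta:
                    active.add(v)
                    changed = True
    return active

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
top = sorted(G, key=G.degree, reverse=True)[:25]   # degree-rank initiator strategy
print(f"cascade size: {len(threshold_cascade(G, top))} / {G.number_of_nodes()}")
```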

  5. Analysis improves selection of rheological model for slurries

    SciTech Connect

    Moftah, K. )

    1993-10-25

    The use of a statistical index of determination can help select a fluid model to describe the rheology of oil well cement slurries. The closer the index is to unity, the better the particular model will describe the actual fluid behavior. Table 1 lists a computer program written in Quick Basic to calculate rheological parameters and an index of determination for the Bingham plastic and power law models. The points used for the calculation of the rheological parameters can be selected from the data set. The skipped points can then be introduced and the calculations continued, not restarted, to obtain the parameters for the full set of data. The two sets of results are then compared for the decision to include or exclude the added points in the regression. The program also calculates the apparent viscosity to help determine where turbulence or high gross error occurred. In addition, the program calculates the confidence interval of the rheological parameters for a 90% level of confidence.
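
    A minimal sketch of the model comparison the program performs, on synthetic viscometer readings: fit the Bingham plastic and power-law models and compare their coefficients of determination. This is a Python stand-in, not the Quick Basic program itself.

```python
# Sketch: Bingham plastic vs. power-law fit to synthetic shear data; the
# model with the index of determination closer to unity fits better.
import numpy as np
from scipy.optimize import curve_fit

def bingham(rate, tau0, mu_p):       # tau = tau0 + mu_p * shear rate
    return tau0 + mu_p * rate

def power_law(rate, k, n):           # tau = k * shear rate^n
    return k * rate ** n

rate = np.array([5.1, 10.2, 51.0, 102.0, 170.0, 340.0, 511.0, 1022.0])  # 1/s
tau = 4.0 + 0.02 * rate + np.random.default_rng(2).normal(0, 0.4, rate.size)  # Pa

for name, model, p0 in [("Bingham plastic", bingham, [1.0, 0.01]),
                        ("power law", power_law, [1.0, 0.5])]:
    p, _ = curve_fit(model, rate, tau, p0=p0)
    r2 = 1 - np.sum((tau - model(rate, *p)) ** 2) / np.sum((tau - tau.mean()) ** 2)
    print(f"{name:>15}: R^2 = {r2:.4f}")
```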

  6. Competition and natural selection in a mathematical model of cancer.

    PubMed

    Nagy, John D

    2004-07-01

    A malignant tumor is a dynamic amalgamation of various cell phenotypes, both cancerous (parenchyma) and healthy (stroma). These diverse cells compete over resources as well as cooperate to maintain tumor viability. Therefore, tumors are both an ecological community and an integrated tissue. An understanding of how natural selection operates in this unique ecological context should expose unappreciated vulnerabilities shared by all cancers. In this study I address natural selection's role in tumor evolution by developing and exploring a mathematical model of a heterogeneous primary neoplasm. The model is a system of nonlinear ordinary differential equations tracking the mass of up to two different parenchyma cell types, the mass of vascular endothelial cells from which new tumor blood vessels are built and the total length of tumor microvessels. Results predict the possibility of a hypertumor: a focus of aggressively reproducing parenchyma cells that invade and destroy part or all of the tumor, perhaps before it becomes a clinical entity. If this phenomenon occurs, then we should see examples of tumors that develop an aggressive histology but are paradoxically prone to extinction. Neuroblastoma, a common childhood cancer, may sometimes fit this pattern. In addition, this model suggests that parenchyma cell diversity can be maintained by a tissue-like integration of cells specialized to provide different services.

  7. Selection Experiments in the Penna Model for Biological Aging

    NASA Astrophysics Data System (ADS)

    Medeiros, G.; Idiart, M. A.; de Almeida, R. M. C.

    We consider the Penna model for biological aging to investigate correlations between early fertility and late-life survival rates in populations at equilibrium. We consider inherited initial reproduction ages together with a reproduction cost, translated into a probability that mother and offspring die at birth, depending on the mother's age. For convenient sets of parameters, the equilibrated populations present genetic variability with regard to both the genetically programmed death age and the initial reproduction age. In the asexual Penna model, a negative correlation between early-life fertility and late-life survival rates naturally emerges in the stationary solutions. In the sexual Penna model, selection experiments are performed in which individuals are sorted by initial reproduction age from the equilibrated populations and the separated populations are evolved independently. After a transient, a negative correlation between early fertility and late-age survival rates also emerges, in the sense that populations that start reproducing earlier present a lower average genetically programmed death age. These effects appear due to the age structure of populations in the steady-state solution of the evolution equations. We claim that the same demographic effects may be playing an important role in selection experiments in the laboratory.
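
    For readers new to the model, here is a minimal sketch of the standard asexual Penna bit-string dynamics (the common textbook form with a Verhulst factor; parameters are illustrative and not those used in the study above).

```python
# Sketch: asexual Penna model. An individual dies once T deleterious bits are
# active among the first `age` genome positions; reproduction after age R
# adds M new deleterious mutations to each offspring genome.
import numpy as np

rng = np.random.default_rng(3)
GENOME, T, R, M, NMAX = 64, 3, 8, 1, 5000

pop_age = np.zeros(500, dtype=int)
pop_gen = np.zeros((500, GENOME), dtype=bool)

for step in range(200):
    pop_age += 1
    bad = np.array([g[:a].sum() for g, a in zip(pop_gen, pop_age)])
    alive = (bad < T) & (pop_age < GENOME)
    alive &= rng.random(alive.size) > pop_age.size / NMAX   # Verhulst dilution
    pop_age, pop_gen = pop_age[alive], pop_gen[alive]
    parents = pop_age >= R
    kids = pop_gen[parents].copy()
    flips = rng.integers(0, GENOME, size=(kids.shape[0], M))
    for kid, idx in zip(kids, flips):
        kid[idx] = True                                     # new deleterious mutations
    pop_age = np.concatenate([pop_age, np.zeros(kids.shape[0], dtype=int)])
    pop_gen = np.concatenate([pop_gen, kids])

print("population:", pop_age.size, "| mean age:", round(float(pop_age.mean()), 2))
```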

  8. Continuum model for chiral induced spin selectivity in helical molecules

    SciTech Connect

    Medina, Ernesto; González-Arraga, Luis A.; Finkelstein-Shapiro, Daniel; Mujica, Vladimiro; Berche, Bertrand

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well-oriented p_z-type orbitals on base molecules forming a staircase of definite chirality. In a tight-binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z-π_z coupling via interbase p_x,y-p_z hopping, introducing spin-coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  9. Direction selectivity in a model of the starburst amacrine cell.

    PubMed

    Tukker, John J; Taylor, W Rowland; Smith, Robert G

    2004-01-01

    The starburst amacrine cell (SBAC), found in all mammalian retinas, is thought to provide the directional inhibitory input recorded in On-Off direction-selective ganglion cells (DSGCs). While voltage recordings from the somas of SBACs have not shown robust direction selectivity (DS), the dendritic tips of these cells display direction-selective calcium signals, even when gamma-aminobutyric acid (GABAa,c) channels are blocked, implying that inhibition is not necessary to generate DS. This suggested that the distinctive morphology of the SBAC could generate a DS signal at the dendritic tips, where most of its synaptic output is located. To explore this possibility, we constructed a compartmental model incorporating realistic morphological structure, passive membrane properties, and excitatory inputs. We found robust DS at the dendritic tips but not at the soma. Two-spot apparent motion and annulus radial motion produced weak DS, but thin bars produced robust DS. For these stimuli, DS was caused by the interaction of a local synaptic input signal with a temporally delayed "global" signal, that is, an excitatory postsynaptic potential (EPSP) that spread from the activated inputs into the soma and throughout the dendritic tree. In the preferred direction the signals in the dendritic tips coincided, allowing summation, whereas in the null direction the local signal preceded the global signal, preventing summation. Sine-wave grating stimuli produced the greatest amount of DS, especially at high velocities and low spatial frequencies. The sine-wave DS responses could be accounted for by a simple mathematical model, which summed phase-shifted signals from soma and dendritic tip. By testing different artificial morphologies, we discovered DS was relatively independent of the morphological details, but depended on having a sufficient number of inputs at the distal tips and a limited electrotonic isolation. Adding voltage-gated calcium channels to the model showed that their…
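
    The phase-shift summation account in the final sentences can be captured in a few lines. Below is a toy sketch (my construction, not the authors' compartmental model): the tip and soma signals coincide in the preferred direction and are offset in the null direction, and rectified summation then yields a positive direction-selectivity index. The delays and amplitudes are assumptions.

```python
# Sketch: direction selectivity from rectified summation of two
# phase-shifted sinusoids. All numbers are illustrative.
import numpy as np

t = np.linspace(0.0, 1.0, 2000)
f = 4.0                                        # grating temporal frequency, Hz

def response(soma_delay):
    """Rectified sum of a local tip signal and a delayed global (somatic) signal."""
    tip = np.sin(2 * np.pi * f * t)
    soma = 0.8 * np.sin(2 * np.pi * f * (t - soma_delay))
    return np.maximum(tip + soma, 0.0).mean()

pref = response(0.0)    # preferred: tip and soma signals coincide
null = response(0.1)    # null: the global signal lags the local one
print(f"DS index = {(pref - null) / (pref + null):.3f}")
```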

  10. An ecosystem model for tropical forest disturbance and selective logging

    NASA Astrophysics Data System (ADS)

    Huang, Maoyi; Asner, Gregory P.; Keller, Michael; Berry, Joseph A.

    2008-03-01

    A new three-dimensional version of the Carnegie-Ames-Stanford Approach (CASA) ecosystem model (CASA-3D) was developed to simulate regional carbon cycling in tropical forest ecosystems after disturbances such as logging. CASA-3D has the following new features: (1) an alternative approach for calculating absorbed photosynthetically active radiation (APAR) using new high-resolution satellite images of forest canopy gap fraction; (2) a pulse disturbance module to modify aboveground carbon pools following forest disturbance; (3) a regrowth module that simulates changes in community composition by considering gap phase regeneration; and (4) a radiative transfer module to simulate the dynamic three-dimensional light environment above the canopy and within gaps after forest disturbance. The model was calibrated with and tested against field observations from experimental logging plots in the Large-scale Biosphere Atmosphere Experiment in Amazonia (LBA) project. The sensitivity of key model parameters was evaluated using Monte Carlo simulations, and the uncertainties in simulated NPP and respiration associated with model parameters and meteorological variables were assessed. We found that selective logging causes changes in forest architecture and composition that result in a cascading set of impacts on the carbon cycling of rainforest ecosystems. Our model sensitivity and uncertainty analyses also highlight the paramount importance of measuring changes in canopy gap fraction from satellite data, as well as canopy light-use efficiency from ecophysiological measurements, to understand the role of forest disturbance on landscape and regional carbon cycling in tropical forests. In sum, our study suggests that CASA-3D may be suitable for regional-scale applications to assess the large-scale effects of selective logging, to provide guidance for forest management, and to understand the role of forest disturbance in regional and global climate studies.

  11. Parametric Pattern Selection in a Reaction-Diffusion Model

    PubMed Central

    Stich, Michael; Ghoshal, Gourab; Pérez-Mercader, Juan

    2013-01-01

    We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate) where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways. PMID:24204813
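
    A minimal sketch of a one-dimensional reaction-diffusion system of this general type (standard Gray-Scott finite differences, with the feed rate F as the control parameter); the paper's actual model and parameter values may differ.

```python
# Sketch: 1-D Gray-Scott integration; spots emerge from a local perturbation
# and their number depends on the feed rate F. Parameters illustrative.
import numpy as np

n, Du, Dv = 400, 2e-5, 1e-5
F, kr, dt, dx = 0.04, 0.063, 1.0, 0.01                  # F is the feed rate

u, v = np.ones(n), np.zeros(n)
u[n // 2 - 10 : n // 2 + 10] = 0.5                      # local perturbation
v[n // 2 - 10 : n // 2 + 10] = 0.25                     # seeds the first spot

def lap(a):
    """Periodic one-dimensional Laplacian."""
    return (np.roll(a, 1) - 2.0 * a + np.roll(a, -1)) / dx ** 2

for _ in range(20_000):
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1.0 - u))
    v += dt * (Dv * lap(v) + uvv - (F + kr) * v)

spots = int(np.sum((v[1:] > 0.2) & (v[:-1] <= 0.2)))    # upward crossings of v = 0.2
print("spot count (arbitrary threshold):", spots)
```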

  12. A Neuronal Network Model for Pitch Selectivity and Representation

    PubMed Central

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions. PMID:27378900

  14. BUILDING ROBUST APPEARANCE MODELS USING ON-LINE FEATURE SELECTION

    SciTech Connect

    PORTER, REID B.; LOVELAND, ROHAN; ROSTEN, ED

    2007-01-29

    In many tracking applications, adapting the target appearance model over time can improve performance. This approach is most popular in high frame rate video applications where latent variables, related to the object's appearance (e.g., orientation and pose), vary slowly from one frame to the next. In these cases the appearance model and the tracking system are tightly integrated, and latent variables are often included as part of the tracking system's dynamic model. In this paper we describe our efforts to track cars in low frame rate data (1 frame/second) acquired from a highly unstable airborne platform. Due to the low frame rate and poor image quality, the appearance of a particular vehicle varies greatly from one frame to the next. This leads us to a different problem: how can we build the best appearance model from all instances of a vehicle we have seen so far? The best appearance model should maximize the future performance of the tracking system and maximize the chances of reacquiring the vehicle once it leaves the field of view. We propose an online feature selection approach to this problem and investigate the performance and computational trade-offs with a real-world dataset.

  15. Stochastic group selection model for the evolution of altruism

    NASA Astrophysics Data System (ADS)

    Silva, Ana T. C.; Fontanari, J. F.

    We study numerically and analytically a stochastic group selection model in which a population of asexually reproducing individuals, each of which can be either altruist or non-altruist, is subdivided into M reproductively isolated groups (demes) of size N. The cost associated with being altruistic is modelled by assigning the fitness 1 - τ, with τ ∈ [0,1], to the altruists and the fitness 1 to the non-altruists. In the case that the altruistic disadvantage τ is not too large, we show that the finite-M fluctuations are small and practically do not alter the deterministic results obtained for M → ∞. However, for large τ these fluctuations greatly increase the instability of the altruistic demes to mutations. These results may be relevant to the dynamics of parasite-host systems and, in particular, to explain the importance of mutation in the evolution of parasite virulence.

  16. Evaluation of Model Fit in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Hu, Jinxiang; Miller, M. David; Huggins-Manley, Anne Corinne; Chen, Yi-Hsin

    2016-01-01

    Cognitive diagnosis models (CDMs) estimate student ability profiles using latent attributes. Model fit to the data needs to be ascertained in order to determine whether inferences from CDMs are valid. This study investigated the usefulness of some popular model fit statistics to detect CDM fit including relative fit indices (AIC, BIC, and CAIC),…

  17. Radial Domany-Kinzel models with mutation and selection

    NASA Astrophysics Data System (ADS)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.
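
    For readers who want to experiment with the underlying automaton, the sketch below (Python with NumPy, assumed available) simulates the standard linear-front Domany-Kinzel model with the usual two parameters p1 and p2; the radial, off-lattice generalization developed in the paper is not reproduced here.

        import numpy as np

        def domany_kinzel(p1, p2, size=500, steps=300, seed=0):
            # Standard 1+1D Domany-Kinzel cellular automaton (linear front):
            # a site becomes active with probability p1 if exactly one of its
            # two neighbours in the previous generation is active, and with
            # probability p2 if both are.
            rng = np.random.default_rng(seed)
            state = rng.random(size) < 0.5              # random initial row
            history = [state.copy()]
            for _ in range(steps):
                active = np.roll(state, 1).astype(int) + np.roll(state, -1).astype(int)
                prob = np.where(active == 1, p1, np.where(active == 2, p2, 0.0))
                state = rng.random(size) < prob
                history.append(state.copy())
            return np.array(history)                    # generations as rows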

  18. Testing a Model of Employee Selection: A Contextual Approach

    ERIC Educational Resources Information Center

    Harada, Kiyoe; Bowman, Jeffry S.

    2004-01-01

    The study examined selection practices applied to education. The selected contextual factors were tested to see whether school administrators took person-organization fit (POF) factors into consideration when selecting applicants during the selection process. The results showed that POF factors affected selection when school size was under…

  19. Percolation model for selective dissolution of multi-component glasses

    SciTech Connect

    Kale, R.P.; Brinker, C.J.

    1995-03-01

    A percolation model is developed which accounts for most known features of the process of porous glass membrane preparation by selective dissolution of multi-component glasses. The model is founded within the framework of classical percolation theory, wherein the components of a glass are represented by random sites on a suitable lattice. Computer simulation is used to mirror the generation of a porous structure during the dissolution process, reproducing many of the features associated with the phenomenon. Simulation results evaluate the effect of the initial composition of the glass on the kinetics of the leaching process as well as the morphology of the generated porous structure. The percolation model establishes the porous structure as a percolating cluster of unleachable constituents in the glass. The simulation algorithm incorporates removal of both the accessible leachable components in the glass and the independent clusters of unleachable components not attached to the percolating cluster. The dissolution process thus becomes limited by the conventional site percolation thresholds of the unleachable components (which restrict the formation of the porous network) as well as of the leachable components (which restrict the accessibility of the solvating medium into the glass). The simulation results delineate the range of compositional variations for successful porous glass preparation and predict the variation of porosity, surface area, dissolution rates and effluent composition with initial composition and time. Results compared well with experimental studies and improved upon similar models attempted in the past.
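
    The site-percolation bookkeeping at the core of such a model is easy to prototype. The sketch below (Python; NumPy and SciPy assumed) occupies lattice sites at a chosen probability and checks for a spanning cluster; the glass-specific dissolution chemistry of the paper is deliberately omitted.

        import numpy as np
        from scipy.ndimage import label

        def has_spanning_cluster(p, size=200, seed=0):
            # Occupy sites independently with probability p, label connected
            # clusters, and test for a top-to-bottom spanning cluster.
            rng = np.random.default_rng(seed)
            occupied = rng.random((size, size)) < p
            clusters, _ = label(occupied)
            top = set(clusters[0]) - {0}
            bottom = set(clusters[-1]) - {0}
            return bool(top & bottom)

        # Spanning becomes likely near the square-lattice site percolation
        # threshold, p_c ≈ 0.593.
        print([has_spanning_cluster(p) for p in (0.45, 0.55, 0.65)])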

  20. A Model for Selection of Eyespots on Butterfly Wings

    PubMed Central

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    Unsolved Problem: The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why a certain wing cell develops an eyespot while other wing cells do not. Key Idea and Model: We illustrate that the key to understanding focus point selection may be in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model posed in the interior of each wing cell that describes the formation of focus points. Using finite-element-based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. In order to complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus

  1. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicated which variables were less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
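
    As an illustration of the filter-style and embedded rankings compared above, the sketch below (Python with scikit-learn; a synthetic dataset stands in for the permafrost data) scores features by mutual information, an information-gain analogue, and by Random Forest importance.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif

        # Synthetic stand-in for the ~20-25 variable presence/absence dataset
        X, y = make_classification(n_samples=1000, n_features=20,
                                   n_informative=6, random_state=0)

        ig_scores = mutual_info_classif(X, y, random_state=0)   # filter ranking
        rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

        for name, scores in (("IG", ig_scores), ("RF", rf.feature_importances_)):
            print(name, np.argsort(scores)[::-1][:5])           # top-5 features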

  2. Multiphysics modeling of selective laser sintering/melting

    NASA Astrophysics Data System (ADS)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  3. Hyperopt: a Python library for model selection and hyperparameter optimization

    NASA Astrophysics Data System (ADS)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
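
    A minimal usage example of the library's core loop, on a toy one-dimensional objective (the search space and objective are illustrative, not from the paper):

        from hyperopt import fmin, tpe, hp, Trials

        space = hp.uniform('x', -10.0, 10.0)    # one continuous hyperparameter
        trials = Trials()                       # records every evaluation

        best = fmin(fn=lambda x: (x - 3.0) ** 2,
                    space=space,
                    algo=tpe.suggest,           # Tree-structured Parzen Estimator
                    max_evals=100,
                    trials=trials)
        print(best)                             # e.g. {'x': 2.98...}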

  4. Feature selection for facial expression recognition using deformation modeling

    NASA Astrophysics Data System (ADS)

    Srivastava, Ruchir; Sim, Terence; Yan, Shuicheng; Ranganath, Surendra

    2010-02-01

    Work on Facial Expression Recognition (FER) has mostly been done using image-based approaches. However, in recent years researchers have also explored the use of 3D information for FER. Most of the time, a neutral (expressionless) face of the subject is needed in both the image-based and 3D-model-based approaches, which might not be practical in many applications. This paper addresses this limitation of previous work by proposing a novel feature-extraction technique that does not require a neutral face of the subject. It is proposed and validated experimentally that the motion of certain landmark points on the face, when exhibiting a particular facial expression, is similar across different persons. A separate classifier is built and relevant feature points are selected for each expression. One-vs-all SVM classification gives promising results.

  5. Model catalysis by size-selected cluster deposition

    SciTech Connect

    Anderson, Scott

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  6. [Model of the selective calcium channel of characean algae].

    PubMed

    Lunevskiĭ, V Z; Zherelova, O M; Aleksandrov, A A; Vinokurov, M G; Berestovskiĭ, G N

    1980-01-01

    The present work was intended to further investigate the selective filter of the calcium channel on both cell membranes and reconstructed channels. For the studies on cell membranes, an inhibitor of chloride channels (ethacrynic acid) was chosen to pass currents only through the calcium channels. On both the cells and reconstructed channels, the permeability of ions of different crystal radii and valencies was investigated. The obtained results suggest that the channel represents a wide water pore with a diameter larger than 8 Å into which ions go together with their nearest water shell. The values of the maximal currents are given by electrostatic interaction of the ions with the anion center of the channel. A phenomenological two-barrier model of the channel is given which describes the movement of all the ions studied. PMID:6251921

  7. ModelMage: a tool for automatic model generation, selection and management.

    PubMed

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software. PMID:19425122

  8. Agent-Based vs. Equation-based Epidemiological Models:A Model Selection Case Study

    SciTech Connect

    Sukumar, Sreenivas R; Nutaro, James J

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk of choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
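
    For the equation-based side of such a comparison, a compartmental SIR model is the usual starting point. The sketch below (Python with SciPy; parameter values are illustrative, not the paper's 1918 calibration) integrates the classical equations; once fitted to incidence data, competing models can be ranked with model-selection criteria such as AIC.

        import numpy as np
        from scipy.integrate import odeint

        def sir(y, t, beta, gamma):
            # Classical equation-based SIR model, in population fractions.
            s, i, r = y
            return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

        t = np.linspace(0.0, 160.0, 400)
        traj = odeint(sir, [0.999, 0.001, 0.0], t, args=(0.35, 0.1))
        print(traj[-1])                         # final (S, I, R) fractions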

  9. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. We discuss to which degree the posterior BMA mean outperformed the prior BMA
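
    A common computational shortcut behind such BMA weights approximates each model's marginal likelihood from its BIC. The sketch below (Python with NumPy; the BIC values are hypothetical) is a minimal version of that calculation, not the paper's full treatment of the weights as uncertain quantities.

        import numpy as np

        def bma_weights(bic_values, prior=None):
            # Posterior model probabilities via p(M_k | data) ∝ p(M_k) exp(-BIC_k / 2)
            bic = np.asarray(bic_values, dtype=float)
            prior = np.ones_like(bic) if prior is None else np.asarray(prior, float)
            rel = np.exp(-0.5 * (bic - bic.min()))   # shift for numerical stability
            w = prior * rel
            return w / w.sum()

        print(bma_weights([1002.3, 998.1, 1010.7, 999.4]))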

  10. Graphical LASSO based Model Selection for Time Series

    NASA Astrophysics Data System (ADS)

    Jung, Alexander; Hannak, Gabor; Goertz, Norbert

    2015-10-01

    We propose a novel graphical model selection (GMS) scheme for high-dimensional stationary time series, i.e., discrete-time processes. The method is based on a natural generalization of the graphical LASSO (gLASSO), introduced originally for GMS based on i.i.d. samples, and estimates the conditional independence graph (CIG) of a time series from a finite-length observation. The gLASSO for time series is defined as the solution of an l1-regularized maximum (approximate) likelihood problem. We solve this optimization problem using the alternating direction method of multipliers (ADMM). Our approach is nonparametric as we do not assume a finite-dimensional (e.g., autoregressive) parametric model for the observed process. Instead, we require the process to be sufficiently smooth in the spectral domain. For Gaussian processes, we characterize the performance of our method theoretically by deriving an upper bound on the probability that our algorithm fails to correctly identify the CIG. Numerical experiments demonstrate the ability of our method to recover the correct CIG from a limited amount of samples.
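
    The time-series gLASSO described here is not available off the shelf, but its i.i.d. ancestor is. The sketch below (Python with scikit-learn; random data as a placeholder) estimates a sparse precision matrix and reads the conditional independence graph from its nonzero off-diagonal entries.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 10))      # placeholder i.i.d. samples

        model = GraphicalLasso(alpha=0.1).fit(X)
        edges = np.abs(model.precision_) > 1e-6 # nonzero partial correlations
        np.fill_diagonal(edges, False)
        print(np.argwhere(edges))               # edge list of the estimated CIG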

  11. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    NASA Astrophysics Data System (ADS)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  12. Model-based fault detection and identification with online aerodynamic model structure selection

    NASA Astrophysics Data System (ADS)

    Lombaerts, T.

    2013-12-01

    This publication describes a recursive algorithm for the approximation of time-varying nonlinear aerodynamic models by means of a joint adaptive selection of the model structure and parameter estimation. This procedure is called adaptive recursive orthogonal least squares (AROLS) and is an extension and modification of the previously developed ROLS procedure. This algorithm is particularly useful for model-based fault detection and identification (FDI) of aerospace systems. After the failure, a completely new aerodynamic model can be elaborated recursively with respect to structure as well as parameter values. The performance of the identification algorithm is demonstrated on a simulation data set.
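
    The AROLS algorithm itself is not public code, but the parameter-estimation half of such recursive identification is standard. The sketch below (Python; all names and defaults are illustrative) implements plain recursive least squares with a forgetting factor for tracking time-varying parameters, leaving out the paper's recursive structure selection.

        import numpy as np

        class RecursiveLeastSquares:
            # Recursive least squares with exponential forgetting
            # (forgetting < 1 lets the estimate track time-varying models).
            def __init__(self, n_params, forgetting=0.98):
                self.theta = np.zeros(n_params)
                self.P = 1e3 * np.eye(n_params)   # large initial covariance
                self.lam = forgetting

            def update(self, x, y):
                x = np.asarray(x, dtype=float)
                Px = self.P @ x
                gain = Px / (self.lam + x @ Px)
                self.theta += gain * (y - x @ self.theta)
                self.P = (self.P - np.outer(gain, Px)) / self.lam
                return self.theta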

  13. Binocular rivalry waves in a directionally selective neural field model

    NASA Astrophysics Data System (ADS)

    Carroll, Samuel R.; Bressloff, Paul C.

    2014-10-01

    We extend a neural field model of binocular rivalry waves in the visual cortex to incorporate direction selectivity of moving stimuli. For each eye, we consider a one-dimensional network of neurons that respond maximally to a fixed orientation and speed of a grating stimulus. Recurrent connections within each one-dimensional network are taken to be excitatory and asymmetric, where the asymmetry captures the direction and speed of the moving stimuli. Connections between the two networks are taken to be inhibitory (cross-inhibition). As per previous studies, we incorporate slow adaption as a symmetry breaking mechanism that allows waves to propagate. We derive an analytical expression for traveling wave solutions of the neural field equations, as well as an implicit equation for the wave speed as a function of neurophysiological parameters, and analyze their stability. Most importantly, we show that propagation of traveling waves is faster in the direction of stimulus motion than against it, which is in agreement with previous experimental and computational studies.

  14. A Model and Heuristic for Solving Very Large Item Selection Problems.

    ERIC Educational Resources Information Center

    Swanson, Len; Stocking, Martha L.

    1993-01-01

    A model for solving very large item selection problems is presented. The model builds on binary programming applied to test construction. A heuristic for selecting items that satisfy the constraints in the model is also presented, and various problems are solved using the model and heuristic. (SLD)

  15. Bayesian model selection for a finite element model of a large civil aircraft

    SciTech Connect

    Hemez, F. M.; Rutherford, A. C.

    2004-01-01

    Nine aircraft stiffness parameters have been varied and used as inputs to a finite element model of an aircraft to generate natural frequency and deflection features (Goge, 2003). This data set (147 input parameter configurations and associated outputs) is now used to generate a metamodel, or a fast running surrogate model, using Bayesian model selection methods. Once a forward relationship is defined, the metamodel may be used in an inverse sense. That is, knowing the measured output frequencies and deflections, what were the input stiffness parameters that caused them?
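
    As a hedged sketch of the metamodel idea (not the authors' implementation), the snippet below fits a Gaussian-process surrogate from stiffness-like inputs to a frequency-like output and then uses it in the inverse direction by optimization; the data, dimensions and target value are synthetic placeholders.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.gaussian_process import GaussianProcessRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0.5, 1.5, size=(147, 9))            # 147 stiffness configurations
        y = np.sqrt(X).sum(axis=1) + 0.01 * rng.standard_normal(147)  # toy output

        surrogate = GaussianProcessRegressor().fit(X, y)    # fast-running metamodel

        # Inverse use: which inputs would produce a measured output of 9.3?
        target = 9.3
        res = minimize(lambda x: (surrogate.predict(x.reshape(1, -1))[0] - target) ** 2,
                       x0=np.ones(9), bounds=[(0.5, 1.5)] * 9)
        print(res.x)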

  16. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  17. Principal Selection in Rural School Districts: A Process Model.

    ERIC Educational Resources Information Center

    Richardson, M. D.; And Others

    Recent research illustrates the increasingly important role of the school principal. As a result, procedures for selecting principals have also become more critical to rural school districts. School systems, particularly rural school districts, are encouraged to adopt systematic, rational means for selecting administrators. Such procedures will…

  18. A sensitivity analysis of model selection in modeling the reactive transport of cesium in crushed granite.

    PubMed

    Cheng, Hwai-Ping; Li, Ming-Hsu; Li, Samuel

    2003-03-01

    We performed a sensitivity analysis of model selection in modeling the reactive transport of cesium in crushed granite through model calibration and validation. Based on solid-phase analysis data and kinetic batch experimental results, we hypothesized three two-site sorption models in the LEHGC reactive transport model to fit the breakthrough curves (BTCs) from the corresponding column experiments. The analysis of breakthrough curves shows that both the empirical two-site kinetic linear sorption model and the semi-mechanistic/semi-empirical two-site kinetic surface complexation model, regardless of their complexity, can match our experimental data fairly well under the given test conditions. A numerical experiment to further compare the two models shows that they behave differently when the pore velocity is not of the same order of magnitude as our test velocities. This result indicates that further investigations to help determine a better model are needed. We suggest that a multistage column experiment, which tests over the whole range of practical flow velocities, should be conducted to help alleviate inadequate hypothesized models.

  19. Feature selection for physics model based object discrimination

    NASA Astrophysics Data System (ADS)

    Wang, Chunmei; Collins, Leslie

    2005-06-01

    We investigated the application of two state-of-the-art feature selection algorithms for subsurface target discrimination. One is called joint classification and feature optimization (JCFO), which imposes a sparse prior on the features and optimizes the classifier and its predictors simultaneously via an expectation maximization (EM) algorithm. The other selects features by directly maximizing the hypothesis margin between targets and clutter. The results of feature selection and target discrimination are demonstrated using wideband electromagnetic induction data collected at the Aberdeen Proving Ground Standardized Test Site for UXO discrimination. It is shown that the classification performance is significantly improved by only including a compact set of relevant features.

  20. Exploratory subgroup analysis in clinical trials by model selection.

    PubMed

    Rosenkranz, Gerd K

    2016-09-01

    The interest in individualized medicines and upcoming or renewed regulatory requests to assess treatment effects in subgroups of confirmatory trials requires statistical methods that account for selection uncertainty and selection bias after having performed the search for meaningful subgroups. The challenge is to judge the strength of the apparent findings after mining the same data to discover them. In this paper, we describe a resampling approach that allows the subgroup-finding process to be replicated many times. The replicates are used to adjust the effect estimates for selection bias and to provide variance estimators that account for selection uncertainty. A simulation study provides some evidence of the performance of the method and an example from oncology illustrates its use. PMID:27230820

  1. Using Wherry's Adjusted R Squared and Mallow's C(p) for Model Selection from All Possible Regressions.

    ERIC Educational Resources Information Center

    Olejnik, Stephen; Mills, Jamie; Keselman, Harvey

    2000-01-01

    Evaluated the use of Mallow's C(p) and Wherry's adjusted R squared (R. Wherry, 1931) statistics to select a final model from a pool of model solutions using computer generated data. Neither statistic identified the underlying regression model any better than, and usually less well than, the stepwise selection method, which itself was poor for…

  2. Bayesian model selection of template forward models for EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.

  3. Stimulus design for model selection and validation in cell signaling.

    PubMed

    Apgar, Joshua F; Toettcher, Jared E; Endy, Drew; White, Forest M; Tidor, Bruce

    2008-02-01

    Mechanism-based chemical kinetic models are increasingly being used to describe biological signaling. Such models serve to encapsulate current understanding of pathways and to enable insight into complex biological processes. One challenge in model development is that, with limited experimental data, multiple models can be consistent with known mechanisms and existing data. Here, we address the problem of model ambiguity by providing a method for designing dynamic stimuli that, in stimulus-response experiments, distinguish among parameterized models with different topologies, i.e., reaction mechanisms, in which only some of the species can be measured. We develop the approach by presenting two formulations of a model-based controller that is used to design the dynamic stimulus. In both formulations, an input signal is designed for each candidate model and parameterization so as to drive the model outputs through a target trajectory. The quality of a model is then assessed by the ability of the corresponding controller, informed by that model, to drive the experimental system. We evaluated our method on models of antibody-ligand binding, mitogen-activated protein kinase (MAPK) phosphorylation and de-phosphorylation, and larger models of the epidermal growth factor receptor (EGFR) pathway. For each of these systems, the controller informed by the correct model is the most successful at designing a stimulus to produce the desired behavior. Using these stimuli we were able to distinguish between models with subtle mechanistic differences or where inputs and outputs were multiple reactions removed from the model differences. An advantage of this method of model discrimination is that it does not require novel reagents or altered measurement techniques; the only change to the experiment is the time course of stimulation. Taken together, these results provide a strong basis for using designed input stimuli as a tool for the development of cell signaling models.

  4. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    PubMed

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.

  5. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a given level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amounts of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.

  6. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    NASA Astrophysics Data System (ADS)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and direction of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents

  7. Selected comments on the ORNL Residential Energy-Use Model

    SciTech Connect

    Herbert, J.H.

    1980-06-01

    This report assesses critical technical aspects of the Oak Ridge National Laboratory (ORNL) Residential Energy Use Model. An important component of the ORNL Model is determination of the thermal performance of new equipment or structures. The examples presented here are illustrative of the type of analytic problems discovered in a detailed assessment of the model. A list of references is appended.

  8. Selection of Authentic Modelling Practices as Contexts for Chemistry Education

    ERIC Educational Resources Information Center

    Prins, Gjalt T.; Bulte, Astrid M. W.; van Driel, Jan H.; Pilot, Albert

    2008-01-01

    In science education, students should come to understand the nature and significance of models. In the case of chemistry education it is argued that the present use of models is often not meaningful from the students' perspective. A strategy to overcome this problem is to use an authentic chemical modelling practice as a context for a curriculum…

  9. Support interference of wind tunnel models: A selective annotated bibliography

    NASA Technical Reports Server (NTRS)

    Tuttle, M. H.; Gloss, B. B.

    1981-01-01

    This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.

  10. A Four-Step Model for Teaching Selection Interviewing Skills

    ERIC Educational Resources Information Center

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  11. Support interference of wind tunnel models: A selective annotated bibliography

    NASA Technical Reports Server (NTRS)

    Tuttle, M. H.; Lawing, P. L.

    1984-01-01

    This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.

  12. Young Children's Selective Learning of Rule Games from Reliable and Unreliable Models

    ERIC Educational Resources Information Center

    Rakoczy, Hannes; Warneken, Felix; Tomasello, Michael

    2009-01-01

    We investigated preschoolers' selective learning from models that had previously appeared to be reliable or unreliable. Replicating previous research, children from 4 years of age selectively learned novel words from reliable over unreliable speakers. Extending previous research, children also selectively learned other kinds of acts--novel games--from…

  13. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection

    PubMed Central

    Read, Mark N; Bailey, Jacqueline; Timmis, Jon; Chtanova, Tatyana

    2016-01-01

    The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto fronts of optimal

  15. Fuel model selection for BEHAVE in midwestern oak savannas

    USGS Publications Warehouse

    Grabner, K.W.; Dwyer, J.P.; Cutter, B.E.

    2001-01-01

    BEHAVE, a fire behavior prediction system, can be a useful tool for managing areas with prescribed fire. However, the proper choice of fuel models can be critical in developing management scenarios. BEHAVE predictions were evaluated using four standardized fuel models that partially described oak savanna fuel conditions: Fuel Model 1 (Short Grass), 2 (Timber and Grass), 3 (Tall Grass), and 9 (Hardwood Litter). Although all four models yielded regressions with R² in excess of 0.8, Fuel Model 2 produced the most reliable fire behavior predictions.

  16. Accurate characterization of delay discounting: a multiple model approach using approximate Bayesian model selection and a unified discounting measure.

    PubMed

    Franck, Christopher T; Koffarnus, Mikhail N; House, Leanna L; Bickel, Warren K

    2015-01-01

    The study of delay discounting, or valuation of future rewards as a function of delay, has contributed to understanding the behavioral economics of addiction. Accurate characterization of discounting can be furthered by statistical model selection given that many functions have been proposed to measure future valuation of rewards. The present study provides a convenient Bayesian model selection algorithm that selects the most probable discounting model among a set of candidate models chosen by the researcher. The approach assigns the most probable model for each individual subject. Importantly, effective delay 50 (ED50) functions as a suitable unifying measure that is computable for and comparable between a number of popular functions, including both one- and two-parameter models. The combined model selection/ED50 approach is illustrated using empirical discounting data collected from a sample of 111 undergraduate students with models proposed by Laibson (1997); Mazur (1987); Myerson & Green (1995); Rachlin (2006); and Samuelson (1937). Computer simulation suggests that the proposed Bayesian model selection approach outperforms the single model approach when data truly arise from multiple models. When a single model underlies all participant data, the simulation suggests that the proposed approach fares no worse than the single model approach.
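
    A quick way to see why ED50 unifies the models: for Mazur's hyperbolic discounting V = 1/(1 + kD), value halves at D = 1/k, while for Samuelson's exponential V = exp(-kD) it halves at D = ln(2)/k. The sketch below, with hypothetical indifference-point data and an ordinary BIC comparison standing in for the paper's Bayesian algorithm, fits both one-parameter models and reports each model's ED50.

```python
# Hypothetical indifference-point data: subjective value vs. delay in days.
import numpy as np
from scipy.optimize import curve_fit

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
values = np.array([0.9, 0.75, 0.5, 0.35, 0.25, 0.15])

def hyperbolic(d, k):    # Mazur (1987): V = 1 / (1 + k*d), ED50 = 1/k
    return 1.0 / (1.0 + k * d)

def exponential(d, k):   # Samuelson (1937): V = exp(-k*d), ED50 = ln(2)/k
    return np.exp(-k * d)

def fit_bic(model, d, v):
    (k_hat,), _ = curve_fit(model, d, v, p0=[0.01])
    rss = np.sum((v - model(d, k_hat)) ** 2)
    n, p = len(v), 1
    return n * np.log(rss / n) + p * np.log(n), k_hat

for name, model, ed50 in [("hyperbolic", hyperbolic, lambda k: 1.0 / k),
                          ("exponential", exponential, lambda k: np.log(2) / k)]:
    score, k_hat = fit_bic(model, delays, values)
    print(f"{name}: BIC = {score:.2f}, ED50 = {ed50(k_hat):.1f} days")
```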

  17. Optimal selection of Orbital Replacement Unit on-orbit spares - A Space Station system availability model

    NASA Technical Reports Server (NTRS)

    Schwaab, Douglas G.

    1991-01-01

    A mathematical programming model is presented to optimize the selection of Orbital Replacement Unit on-orbit spares for the Space Station. The model maximizes system availability under the constraints of logistics resupply-cargo weight and volume allocations.
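
    The abstract does not give the formulation, but the described problem has the shape of a knapsack-style program: maximize availability subject to weight and volume budgets. Below is a minimal greedy (marginal-analysis) sketch under an assumed Poisson spares-demand model; the item data, budgets, and availability model are all illustrative inventions, not the paper's.

```python
# Illustrative item data: (name, Poisson demand rate, unit weight, unit volume).
import math

items = [
    ("pump", 0.8, 40.0, 0.30),
    ("gyro", 0.5, 25.0, 0.20),
    ("valve", 1.2, 10.0, 0.05),
]
W_MAX, V_MAX = 120.0, 0.8
spares = {name: 0 for name, *_ in items}
weight = volume = 0.0

def log_avail(lam, s):
    # log P(demand <= s) for Poisson demand with mean lam over a resupply period
    return math.log(sum(math.exp(-lam) * lam**k / math.factorial(k)
                        for k in range(s + 1)))

while True:
    best = None
    for name, lam, w, v in items:
        if weight + w > W_MAX or volume + v > V_MAX:
            continue  # this spare no longer fits the budgets
        # availability gain per unit weight (volume could be weighted in too)
        gain = (log_avail(lam, spares[name] + 1) - log_avail(lam, spares[name])) / w
        if best is None or gain > best[0]:
            best = (gain, name, w, v)
    if best is None:
        break
    _, name, w, v = best
    spares[name] += 1
    weight += w
    volume += v

print(spares, "weight:", weight, "volume:", volume)
```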

  18. Amine modeling for CO2 capture: internals selection.

    PubMed

    Karpe, Prakash; Aichele, Clint P

    2013-04-16

    Traditionally, trays have been the mass-transfer device of choice in amine absorption units. However, the need to process large volumes of flue gas to capture CO2 and the resultant high costs of multiple trains of large trayed columns have prompted process licensors and vendors to investigate alternative mass-transfer devices. These alternatives include third-generation random packings and structured packings. Nevertheless, clear-cut guidelines for selection of packings for amine units are lacking. This paper provides well-defined guidelines and a consistent framework for the choice of mass-transfer devices for amine absorbers and regenerators. This work emphasizes the role played by the flow parameter, a measure of column liquid loading and pressure, in the type of packing selected. In addition, this paper demonstrates the significant economic advantage of packings over trays in terms of capital costs (CAPEX) and operating costs (OPEX).

  19. Modelling the nucleation and chirality selection of carbon nanotubes.

    PubMed

    Li, L; Reich, S; Robertson, J

    2006-05-01

    The selection of chiralities of single-walled carbon nanotubes is one of the key problems of nanotube science. We suggest that the chirality-selective growth of SWNTs could be achieved using chemical vapour deposition (CVD) by controlling the type of caps that form during the nucleation stage. As the catalyst can be solid during CVD, the formation of particular caps may be favoured by an epitaxial relationship to the catalyst surface. The corresponding tubes would then grow preferentially. We show by ab-initio calculations that the formation energies of some lattice-matched caps and tubes are 1-2 eV lower than those of non-lattice-matched structures.

  20. Diagnosing Hybrid Systems: a Bayesian Model Selection Approach

    NASA Technical Reports Server (NTRS)

    McIlraith, Sheila A.

    2005-01-01

    In this paper we examine the problem of monitoring and diagnosing noisy complex dynamical systems that are modeled as hybrid systems: models of continuous behavior, interleaved by discrete transitions. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. Building on our previous work in this area (MBCG99; MBCG00), our specific focus in this paper is on the mathematical formulation of the hybrid monitoring and diagnosis task as a Bayesian model tracking algorithm. The nonlinear dynamics of many hybrid systems present challenges to probabilistic tracking. Further, probabilistic tracking of a system for the purposes of diagnosis is problematic because the models of the system corresponding to failure modes are numerous and generally very unlikely. To focus tracking on these unlikely models and to reduce the number of potential models under consideration, we exploit logic-based techniques for qualitative model-based diagnosis to conjecture a limited initial set of consistent candidate models. In this paper we discuss alternative tracking techniques that are relevant to different classes of hybrid systems, focusing specifically on a method for tracking multiple models of nonlinear behavior simultaneously using factored sampling and conditional density propagation. To illustrate and motivate the approach described in this paper we examine the problem of monitoring and diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.

  1. NEW MDS AND CLUSTERING BASED ALGORITHMS FOR PROTEIN MODEL QUALITY ASSESSMENT AND SELECTION.

    PubMed

    Wang, Qingguo; Shang, Charles; Xu, Dong; Shang, Yi

    2013-10-25

    In protein tertiary structure prediction, assessing the quality of predicted models is an essential task. Over the past years, many methods have been proposed for the protein model quality assessment (QA) and selection problem. Despite significant advances, the discerning power of current methods is still unsatisfactory. In this paper, we propose two new algorithms, CC-Select and MDS-QA, based on multidimensional scaling and k-means clustering. For the model selection problem, CC-Select combines consensus with clustering techniques to select the best models from a given pool. Given a set of predicted models, CC-Select first calculates a consensus score for each structure based on its average pairwise structural similarity to other models. Then, similar structures are grouped into clusters using multidimensional scaling and clustering algorithms. In each cluster, the one with the highest consensus score is selected as a candidate model. For the QA problem, MDS-QA combines single-model scoring functions with consensus to determine more accurate assessment score for every model in a given pool. Using extensive benchmark sets of a large collection of predicted models, we compare the two algorithms with existing state-of-the-art quality assessment methods and show significant improvement. PMID:24808625

  2. NEW MDS AND CLUSTERING BASED ALGORITHMS FOR PROTEIN MODEL QUALITY ASSESSMENT AND SELECTION

    PubMed Central

    WANG, QINGGUO; SHANG, CHARLES; XU, DONG

    2014-01-01

    In protein tertiary structure prediction, assessing the quality of predicted models is an essential task. Over the past years, many methods have been proposed for the protein model quality assessment (QA) and selection problem. Despite significant advances, the discerning power of current methods is still unsatisfactory. In this paper, we propose two new algorithms, CC-Select and MDS-QA, based on multidimensional scaling and k-means clustering. For the model selection problem, CC-Select combines consensus with clustering techniques to select the best models from a given pool. Given a set of predicted models, CC-Select first calculates a consensus score for each structure based on its average pairwise structural similarity to other models. Then, similar structures are grouped into clusters using multidimensional scaling and clustering algorithms. In each cluster, the one with the highest consensus score is selected as a candidate model. For the QA problem, MDS-QA combines single-model scoring functions with consensus to determine more accurate assessment score for every model in a given pool. Using extensive benchmark sets of a large collection of predicted models, we compare the two algorithms with existing state-of-the-art quality assessment methods and show significant improvement. PMID:24808625
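
    A compact sketch of the CC-Select half of the method described above: consensus scores from average pairwise similarity, an MDS embedding of the dissimilarities, k-means clustering, and the highest-consensus member of each cluster as a candidate. The similarity matrix here is random stand-in data; the paper's structural-similarity measure and cluster count may differ.

```python
# Random stand-in similarity matrix for n candidate structure models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n = 30
similarity = rng.uniform(0.4, 1.0, (n, n))
similarity = (similarity + similarity.T) / 2
np.fill_diagonal(similarity, 1.0)

# Consensus score: average pairwise similarity to all other models.
consensus = (similarity.sum(axis=1) - 1.0) / (n - 1)

# MDS embedding of the dissimilarities, then k-means clustering.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(1.0 - similarity)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Candidate per cluster: the member with the highest consensus score.
for c in range(3):
    members = np.flatnonzero(labels == c)
    print(f"cluster {c}: candidate model {members[np.argmax(consensus[members])]}")
```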

  3. A Journal Selection Model and its Implications for a Library System

    ERIC Educational Resources Information Center

    Kraft, D. H.; Hill, T. W., Jr.

    1973-01-01

    The problem of selecting which journals to acquire in order to best satisfy library objectives is modeled as a zero-one linear programming problem and examined in detail. The model can be used to aid the librarian in making better selection decisions. (30 references) (Author/KE)

  4. Application of Model-Selection Criteria to Some Problems in Multivariate Analysis.

    ERIC Educational Resources Information Center

    Sclove, Stanley L.

    1987-01-01

    A review of model-selection criteria is presented, suggesting their similarities. Some problems treated by hypothesis tests may be more expeditiously treated by the application of model-selection criteria. Multivariate analysis, cluster analysis, and factor analysis are considered. (Author/GDC)

  5. Fantasy-Testing-Assessment: A Proposed Model for the Investigation of Mate Selection.

    ERIC Educational Resources Information Center

    Nofz, Michael P.

    1984-01-01

    Proposes a model for mate selection which outlines three modes of interpersonal relating--fantasy, testing, and assessment (FTA). The model is viewed as a more accurate representation of mate selection processes than suggested by earlier theories, and can be used to clarify couples' understandings of their own relationships. (JAC)

  6. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.

  7. Physics-based statistical learning approach to mesoscopic model selection

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
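
    The cross-validation idea generalizes readily beyond the sGLE setting: score each candidate complexity by held-out error rather than by training fit. A generic stand-in using polynomial regression (not the paper's coarse-grained Ising/sGLE models):

```python
# Stand-in 1D regression data with a cubic ground truth.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200).reshape(-1, 1)
y = x.ravel() ** 3 - x.ravel() + rng.normal(0, 0.3, 200)

for degree in (2, 3, 4, 6, 8):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    cv_mse = -cross_val_score(model, x, y, cv=5,
                              scoring="neg_mean_squared_error").mean()
    print(f"degree {degree}: CV MSE = {cv_mse:.4f}")
```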

  8. Default Bayes Factors for Model Selection in Regression

    ERIC Educational Resources Information Center

    Rouder, Jeffrey N.; Morey, Richard D.

    2012-01-01

    In this article, we present a Bayes factor solution for inference in multiple regression. Bayes factors are principled measures of the relative evidence from data for various models or positions, including models that embed null hypotheses. In this regard, they may be used to state positive evidence for a lack of an effect, which is not possible…

  9. Beyond the List: Schools Selecting Alternative CSR Models.

    ERIC Educational Resources Information Center

    Clark, Gail; Apthorp, Helen; Van Buhler, Rebecca; Dean, Ceri; Barley, Zoe

    A study was conducted to describe the population of alternative models for comprehensive school reform in the region served by Mid-continent Research for Education and Learning (McREL). The study addressed the questions of whether schools that did not propose to adopt widely known or implemented reform models were able to design a reform process…

  10. Computational approaches to parameter estimation and model selection in immunology

    NASA Astrophysics Data System (ADS)

    Baker, C. T. H.; Bocharov, G. A.; Ford, J. M.; Lumb, P. M.; Norton, S. J.; Paul, C. A. H.; Junt, T.; Krebs, P.; Ludewig, B.

    2005-12-01

    One of the significant challenges in biomathematics (and other areas of science) is to formulate meaningful mathematical models. Our problem is to decide on a parametrized model which is, in some sense, most likely to represent the information in a set of observed data. In this paper, we illustrate the computational implementation of an information-theoretic approach (associated with a maximum likelihood treatment) to modelling in immunology. The approach is illustrated by modelling LCMV infection using a family of models based on systems of ordinary differential and delay differential equations. The models (which use parameters that have a scientific interpretation) are chosen to fit data arising from experimental studies of virus-cytotoxic T lymphocyte kinetics; the parametrized models that result are arranged in a hierarchy by the computation of Akaike indices. The practical illustration is used to convey more general insight. Because the mathematical equations that comprise the models are solved numerically, the accuracy in the computation has a bearing on the outcome, and we address this and other practical details in our discussion.
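
    The hierarchy-by-Akaike-index step is easy to make concrete. Given each fitted model's maximized log-likelihood and parameter count, AIC = 2k - 2 ln L ranks the family, and Akaike weights give relative support. The log-likelihood values below are placeholders; in practice they come from fitting each ODE/DDE model to the kinetics data.

```python
# Placeholder fit results: (maximized log-likelihood, parameter count).
import math

models = {
    "ODE, no delay":   (-412.3, 5),
    "DDE, one delay":  (-401.8, 6),
    "DDE, two delays": (-400.9, 8),
}

aic = {m: 2 * k - 2 * ll for m, (ll, k) in models.items()}
best = min(aic.values())
raw = {m: math.exp(-(a - best) / 2) for m, a in aic.items()}
z = sum(raw.values())

for m in sorted(aic, key=aic.get):
    print(f"{m}: AIC = {aic[m]:.1f}, dAIC = {aic[m] - best:.1f}, "
          f"weight = {raw[m] / z:.3f}")
```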

  11. Model selection, identification and validation in anaerobic digestion: a review.

    PubMed

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to increasing interest worldwide. However, anaerobic digestion is a complex biological process, where hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a realm of approaches and methods, which can be difficult to fully understand by scientists and engineers dedicated to the plant operation and improvements. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods. PMID:21920578

  12. Demographic modeling of selected fish species with RAMAS

    SciTech Connect

    Saila, S.; Martin, B.; Ferson, S.; Ginzburg, L.; Millstein, J.

    1991-03-01

    The microcomputer program RAMAS 3, developed for EPRI, has been used to model the intrinsic natural variability of seven important fish species: cod, Atlantic herring, yellowtail flounder, haddock, striped bass, American shad and white perch. Demographic data used to construct age-based population models included information on spawning biology, longevity, sex ratio and (age-specific) mortality and fecundity. These data were collected from published and unpublished sources. The natural risks of extinction and of falling below threshold population abundances (quasi-extinction) are derived for each of the seven fish species based on measured and estimated values for their demographic parameters. The analysis of these species provides evidence that including density-dependent compensation in the demographic model typically lowers the expected chance of extinction. This is because, if density dependence generally acts as a restoring force, it seems reasonable to conclude that models which include density dependence would exhibit less fluctuation than models without compensation, since density-dependent populations experience a pull towards equilibrium. Since extinction probabilities are determined by the size of the fluctuation of population abundance, models without density dependence will show higher risks of extinction, given identical circumstances. Thus, models without compensation can be used as conservative estimators of risk; that is, if a compensation-free model yields acceptable extinction risk, adding compensation will not increase this risk. Since it is usually difficult to estimate the parameters needed for a model with compensation, such conservative estimates of the risks of extinction based on a model without compensation are very useful in the methodology of impact assessment. 103 refs., 19 figs., 10 tabs.

  13. Selective Recovery From Failures In A Task Parallel Programming Model

    SciTech Connect

    Dinan, James S.; Singri, Arjun; Sadayappan, Ponnuswamy; Krishnamoorthy, Sriram

    2010-05-17

    We present a fault tolerant task pool execution environment that is capable of performing fine-grain selective restart using a lightweight, distributed task completion tracking mechanism. Compared with conventional checkpoint/restart techniques, this system offers a recovery penalty that is proportional to the degree of failure rather than the system size. We evaluate this system using the Self Consistent Field (SCF) kernel which forms an important component in ab initio methods for computational chemistry. Experimental results indicate that fault tolerant task pools are robust in the presence of an arbitrary number of failures and that they offer low overhead in the absence of faults.

  14. Model selection for athermal cross-linked fiber networks.

    PubMed

    Shahsavari, A; Picu, R C

    2012-07-01

    Athermal random fiber networks are usually modeled by representing each fiber as a truss, a Euler-Bernoulli or a Timoshenko beam, and, in the case of cross-linked networks, each cross-link as a pinned, rotating, or welded joint. In this work we study the effect of these various modeling options on the dependence of the overall network stiffness on system parameters. We conclude that Timoshenko beams can be used for the entire range of density and beam stiffness parameters, while the Euler-Bernoulli model can be used only at relatively low network densities. In the high density-high bending stiffness range, strain energy is stored predominantly in the axial and shear deformation modes, while in the other extreme range of parameters, the energy is stored in the bending mode. The effect of the model size on the network stiffness is also discussed. PMID:23005468

  15. Model atmosphere analysis of selected luminous B stars

    NASA Technical Reports Server (NTRS)

    Fitzpatrick, Edward L.; Massa; Walgren

    1994-01-01

    The general scientific goal of this program has been to determine whether the atmospheric structure of the B-type stars can be represented by the current generation of plane parallel, line-blanketed, LTE stellar atmosphere models sufficiently well to allow accurate effective temperatures and surface gravities to be deduced. The B stars cover a wide range of temperature and luminosity. For the hottest such stars (with T approximately 30,000 K) the applicability of the models may be compromised by departures from LTE in the stellar atmospheres ('non-LTE effects'). At the highest luminosities (the B 'super giants'), the models may be invalidated by departures from plane parallel geometry. Thus we seek to identify the temperature and luminosity range within which these effects are unimportant and where the models may be relied upon.

  16. Cultural competence models in nursing: a selected annotated bibliography.

    PubMed

    Shen, Zuwang

    2004-10-01

    Since the early 1990s, along with a phenomenal growth of nursing literature published on cultural competence, an array of cultural competence and cultural assessment models has been developed. This annotated bibliography provides bibliographic entries to books, book chapters, and journal articles that deal with the construction, development, or conceptualization of cultural competence and cultural assessment models. It also includes entries to books dealing with cultural assessment guides.

  17. Selecting a model for detecting the presence of a trend

    SciTech Connect

    Woodward, W.A.; Gray, H.L.

    1995-08-01

    The authors consider the problem of determining whether the upward trending behavior in the global temperature anomaly series should be forecast to continue. To address this question, the generic problem of determining whether an observed trend in a time series realization is a random (i.e., short-term) trend or a deterministic (i.e., permanent) trend is considered. The importance of making this determination is that forecasts based on these two scenarios are dramatically different. Forecasts based on a series with random trends will not predict the observed trend to continue, while forecasts based on a model with deterministic trend will forecast the trend to continue into the future. In this paper, the authors consider an autoregressive integrated moving average (ARIMA) model and a "deterministic forcing function + autoregressive (AR) noise" model as possible random trend and deterministic trend models, respectively, for realizations displaying trending behavior. A bootstrap-based classification procedure for classifying an observed time series realization as ARIMA or "function + AR" using linear and quadratic forcing functions is introduced. A simulation study demonstrates that the procedure is useful in distinguishing between realizations from these two models. A unit-root test is also examined in an effort to distinguish between these two types of models. Using the techniques developed here, the temperature anomaly series are classified as ARIMA (i.e., having random trends). 18 refs., 1 fig., 8 tabs.
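
    A much simpler stand-in for the bootstrap classifier described above: fit a random-trend model (ARIMA with differencing) and a deterministic "trend + AR noise" model and compare information criteria. The data are synthetic, and comparing AIC across different differencing orders is itself delicate, so this only illustrates the two competing model forms.

```python
# Synthetic unit-root series; the true generating process is a random trend.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(0, 1, 200))

random_trend = ARIMA(y, order=(1, 1, 0)).fit()              # ARIMA: random trend
determ_trend = ARIMA(y, order=(1, 0, 0), trend="ct").fit()  # linear trend + AR(1)

print(f"ARIMA(1,1,0) AIC:        {random_trend.aic:.1f}")
print(f"trend + AR(1) noise AIC: {determ_trend.aic:.1f}")
print("classified as:",
      "random trend" if random_trend.aic < determ_trend.aic
      else "deterministic trend")
```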

  18. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
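
    A loose sketch of the frame-saliency notion under stated assumptions: fit a GMM to per-pixel spatiotemporal features (6-D, as in the abstract) and score each frame by the mean log-likelihood of its feature vectors. The paper estimates saliency inside a modified EM loop; here random stand-in features and a plain post-fit score are used.

```python
# Random stand-in features: n_frames frames, each with a bag of 6-D vectors.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
n_frames, vecs_per_frame = 40, 500
features = rng.normal(size=(n_frames * vecs_per_frame, 6))

gmm = GaussianMixture(n_components=5, random_state=0).fit(features)
scores = gmm.score_samples(features).reshape(n_frames, vecs_per_frame)
saliency = scores.mean(axis=1)            # crude per-frame saliency proxy
print("five most salient frames:", np.argsort(saliency)[-5:])
```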

  19. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness.

  20. A simple model of group selection that cannot be analyzed with inclusive fitness.

    PubMed

    van Veelen, Matthijs; Luo, Shishi; Simon, Burton

    2014-11-01

    A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models, we show two distinct limitations that prevent recasting in terms of inclusive fitness. The first is a limitation across models. We show that if inclusive fitness is to always give the correct prediction, the definition of relatedness needs to change, continuously, along with changes in the parameters of the model. This results in infinitely many different definitions of relatedness - one for every parameter value - which strips relatedness of its meaning. The second limitation is across time. We show that one can find the trajectory for the group selection model by solving a partial differential equation, and that it is mathematically impossible to do this using inclusive fitness. PMID:25034338

  1. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
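
    A toy illustration of a genetic-algorithm search over model structures, scored by AIC on a linear-regression stand-in. The paper's setting is population pharmacokinetic/pharmacodynamic model selection; this sketch only conveys the search strategy, using mutation-only truncation selection.

```python
# Linear-regression stand-in: 8 candidate covariates, 2 truly active.
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1, n)

def aic(mask):
    cols = np.flatnonzero(mask)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (len(cols) + 1)

pop = rng.integers(0, 2, (20, p))             # random initial structures
for _ in range(40):                           # generations
    scores = np.array([aic(m) for m in pop])
    parents = pop[np.argsort(scores)[:10]]    # truncation selection
    kids = parents[rng.integers(0, 10, 10)].copy()
    kids[rng.random((10, p)) < 0.1] ^= 1      # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin([aic(m) for m in pop])]
print("selected covariates:", np.flatnonzero(best))
```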

  2. Turbulence Model Selection for Low Reynolds Number Flows.

    PubMed

    Aftab, S M A; Mohd Rafie, A S; Razak, N A; Ahmad, K A

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. NACA4415 airfoil is commonly used in wind turbines and UAV applications. The stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω and, finally, the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure to establish the accuracy of the simulation, in accord with previous experimental results, is discussed in detail. PMID:27104354

  3. Turbulence Model Selection for Low Reynolds Number Flows.

    PubMed

    Aftab, S M A; Mohd Rafie, A S; Razak, N A; Ahmad, K A

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil's surface. NACA4415 airfoil is commonly used in wind turbines and UAV applications. The stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω and, finally, the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure to establish the accuracy of the simulation, in accord with previous experimental results, is discussed in detail.

  4. Catalog of selected heavy duty transport energy management models

    NASA Technical Reports Server (NTRS)

    Colello, R. G.; Boghani, A. B.; Gardella, N. C.; Gott, P. G.; Lee, W. D.; Pollak, E. C.; Teagan, W. P.; Thomas, R. G.; Snyder, C. M.; Wilson, R. P., Jr.

    1983-01-01

    A catalog of energy management models for heavy duty transport systems powered by diesel engines is presented. The catalog results from a literature survey, supplemented by telephone interviews and mailed questionnaires to discover the major computer models currently used in the transportation industry in the following categories: heavy duty transport systems, which consist of highway (vehicle simulation), marine (ship simulation), rail (locomotive simulation), and pipeline (pumping station simulation); and heavy duty diesel engines, which involve models that match the intake/exhaust system to the engine, fuel efficiency, emissions, combustion chamber shape, fuel injection system, heat transfer, intake/exhaust system, operating performance, and waste heat utilization devices, i.e., turbocharger, bottoming cycle.

  5. Turbulence Model Selection for Low Reynolds Number Flows

    PubMed Central

    2016-01-01

    One of the major flow phenomena associated with low Reynolds number flow is the formation of separation bubbles on an airfoil’s surface. NACA4415 airfoil is commonly used in wind turbines and UAV applications. The stall characteristics are gradual compared to thin airfoils. The primary criterion set for this work is the capture of the laminar separation bubble. Flow is simulated for a Reynolds number of 120,000. The numerical analysis carried out shows the advantages and disadvantages of a few turbulence models. The turbulence models tested were: the one-equation Spalart-Allmaras (S-A), the two-equation SST k-ω, the three-equation Intermittency (γ) SST, k-kl-ω and, finally, the four-equation transition γ-Reθ SST. However, the variation in flow physics differs between these turbulence models. The procedure to establish the accuracy of the simulation, in accord with previous experimental results, is discussed in detail. PMID:27104354

  6. Driver Missense Mutation Identification Using Feature Selection and Model Fusion.

    PubMed

    Soliman, Ahmed T; Meng, Tao; Chen, Shu-Ching; Iyengar, S S; Iyengar, Puneeth; Yordy, John; Shyu, Mei-Ling

    2015-12-01

    Driver mutations propel oncogenesis and occur much less frequently than passenger mutations. The need for automatic and accurate identification of driver mutations has increased dramatically with the exponential growth of mutation data. Current computational solutions to identify driver mutations rely on sequence homology. Here we construct a machine learning-based framework that does not rely on sequence homology or domain knowledge to predict driver missense mutations. A windowing approach to represent the local environment of the sequence around the mutation point as a mutation sample is applied, followed by extraction of three sequence-level features from each sample. After selecting the most significant features, the support vector machine and multimodal fusion strategies are employed to give final predictions. The proposed framework achieves relatively high performance and outperforms current state-of-the-art algorithms. The ease of deploying the proposed framework and the relatively accurate performance make this solution applicable to large-scale mutation data analyses. PMID:26402258
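
    A schematic of the pipeline's shape under stated assumptions: a fixed-size sequence window around each mutation is numerically encoded, significant features are selected, and an SVM classifies. The encoding, window size, and labels below are placeholders, not the paper's features.

```python
# Placeholder 11-residue windows with a toy integer encoding and random labels.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(7)
AA = "ACDEFGHIKLMNPQRSTVWY"
windows = ["".join(rng.choice(list(AA), 11)) for _ in range(300)]
X = np.array([[AA.index(a) for a in w] for w in windows])
y = rng.integers(0, 2, 300)               # 1 = driver, 0 = passenger (placeholder)

clf = make_pipeline(SelectKBest(f_classif, k=6), SVC(kernel="rbf"))
clf.fit(X[:200], y[:200])
print("held-out accuracy:", clf.score(X[200:], y[200:]))
```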

  7. SUPERCRITICAL WATER OXIDATION MODEL DEVELOPMENT FOR SELECTED EPA PRIORITY POLLUTANTS

    EPA Science Inventory

    Supercritical Water Oxidation (SCWO) was evaluated for five compounds: acetic acid, 2,4-dichlorophenol, pentachlorophenol, pyridine, and 2,4-dichlorophenoxyacetic acid (methyl ester). Kinetic models were developed for acetic acid, 2,4-dichlorophenol, and pyridine. The test compounds were e...

  8. A Data Envelopment Analysis Model for Renewable Energy Technology Selection

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Public and media interest in alternative energy sources, such as renewable fuels, has rapidly increased in recent years due to higher prices for oil and natural gas. However, the current body of research providing comparative decision making models that either rank these alternative energy sources a...

  9. Selected Models and Elements of Evaluation for Vocational Educators.

    ERIC Educational Resources Information Center

    Orlich, Donald C.; Murphy, Ronald R.

    The purpose of this manual is to provide vocational educators with evaluation elements and tested models which can assist them in designing evaluation systems. Chapter 1 provides several sets of criteria for inclusion in any general program evaluation. The eleven general areas for which criteria are included are administrative procedures,…

  10. Factor selection and structural identification in the interaction ANOVA model.

    PubMed

    Post, Justin B; Bondell, Howard D

    2013-03-01

    When faced with categorical predictors and a continuous response, the objective of an analysis often consists of two tasks: finding which factors are important and determining which levels of the factors differ significantly from one another. Oftentimes, these tasks are done separately using Analysis of Variance (ANOVA) followed by a post hoc hypothesis testing procedure such as Tukey's Honestly Significant Difference test. When interactions between factors are included in the model, the collapsing of levels of a factor becomes a more difficult problem. When testing for differences between two levels of a factor, claiming no difference would refer not only to equality of main effects, but also to equality of each interaction involving those levels. This structure between the main effects and interactions in a model is similar to the idea of heredity used in regression models. This article introduces a new method for accomplishing both of the common analysis tasks simultaneously in an interaction model while also adhering to the heredity-type constraint on the model. An appropriate penalization is constructed that encourages levels of factors to collapse and entire factors to be set to zero. It is shown that the procedure has the oracle property, implying that asymptotically it performs as well as if the exact structure were known beforehand. We also discuss the application to estimating interactions in the unreplicated case. Simulation studies show the procedure outperforms post hoc hypothesis testing procedures as well as similar methods that do not include a structural constraint. The method is also illustrated using a real data example.

  11. Effect of Temporal Residual Correlation on Estimation of Model Averaging Weights

    NASA Astrophysics Data System (ADS)

    Ye, M.; Lu, D.; Curtis, G. P.; Meyer, P. D.; Yabusaki, S.

    2010-12-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are always calculated using model selection criteria such as AIC, AICc, BIC, and KIC. However, this method sometimes leads to an unrealistic situation in which one model receives an overwhelmingly high averaging weight (even 100%), which cannot be justified by available data and knowledge. It is found in this study that the unrealistic situation is due partly, if not solely, to ignoring residual correlation when estimating the negative log-likelihood function common to all the model selection criteria. In the context of maximum-likelihood or least-squares inverse modeling, the residual correlation is accounted for in the full covariance matrix; when the full covariance matrix is replaced by its diagonal counterpart, data independence is assumed and the correlation is ignored. Treating the correlated residuals as independent distorts the distance between observations and simulations of alternative models, which in turn may lead to incorrect estimation of model selection criteria and model averaging weights. This is illustrated for a set of surface complexation models developed to simulate uranium transport based on a series of column experiments. The residuals are correlated in time, and the time correlation is addressed using a second-order autoregressive model. The modeling results reveal the importance of considering residual correlation in the estimation of model averaging weights.
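
    The likelihood distortion described above can be shown in a few lines: evaluate the Gaussian log-likelihood of two models' residual vectors under a diagonal covariance and under a full AR(1) covariance, and compare the resulting likelihood-based weights (both models get the same parameter count here, so criterion penalties cancel). Residuals are synthetic.

```python
# Synthetic residual vectors for two hypothetical models, correlated in time.
import numpy as np

def gauss_loglik(r, cov):
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(r) * np.log(2 * np.pi) + logdet
                   + r @ np.linalg.solve(cov, r))

def ar1_cov(n, sigma2, rho):
    idx = np.arange(n)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

rng = np.random.default_rng(4)
n, rho = 50, 0.8
L = np.linalg.cholesky(ar1_cov(n, 1.0, rho))
residuals = {"model A": L @ rng.normal(size=n),
             "model B": 1.3 * (L @ rng.normal(size=n))}

for name, cov in [("diagonal", np.eye(n)), ("AR(1)", ar1_cov(n, 1.0, rho))]:
    ll = {m: gauss_loglik(r, cov) for m, r in residuals.items()}
    top = max(ll.values())
    w = {m: np.exp(v - top) for m, v in ll.items()}
    z = sum(w.values())
    print(name, {m: round(v / z, 3) for m, v in w.items()})
```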

  12. Sparse model selection in the highly under-sampled regime

    NASA Astrophysics Data System (ADS)

    Bulso, Nicola; Marsili, Matteo; Roudi, Yasser

    2016-09-01

    We propose a method for recovering the structure of a sparse undirected graphical model when very few samples are available. The method decides on the presence or absence of bonds between pairs of variables by considering one pair at a time and using a closed-form formula, analytically derived by calculating the posterior probability for every possible model explaining a two-body system using Jeffreys prior. The approach does not rely on the optimization of any cost function and consequently is much faster than existing algorithms. Despite this time and computational advantage, numerical results show that for several sparse topologies the algorithm is comparable to the best existing algorithms, and is more accurate in the presence of hidden variables. We apply this approach to the analysis of US stock market data and to neural data, in order to show its efficiency in recovering robust statistical dependencies in real data with non-stationary correlations in time and/or space.

  13. Behavior changes in SIS STD models with selective mixing

    SciTech Connect

    Hyman, J.M.; Li, J.

    1997-08-01

    The authors propose and analyze a heterogeneous, multigroup, susceptible-infective-susceptible (SIS) sexually transmitted disease (STD) model where the desirability and acceptability in partnership formations are functions of the infected individuals. They derive explicit formulas for the epidemic thresholds, prove the existence and uniqueness of the equilibrium states for the two-group model and provide a complete analysis of their local and global stability. The authors then investigate the effects of behavior changes on the transmission dynamics and analyze the sensitivity of the epidemic to the magnitude of the behavior changes. They verify that if people modify their behavior to reduce the probability of infection with individuals in highly infected groups, through either reduced contacts, reduced partner formations, or using safe sex, the infection level may be decreased. However, if people continue to have intragroup and intergroup partnerships, then changing the desirability and acceptability formation cannot eradicate the epidemic once it exceeds the epidemic threshold.

  14. Cold dark matter isocurvature perturbations: Constraints and model selection

    SciTech Connect

    Sollom, Ian; Hobson, Michael P.; Challinor, Anthony

    2009-06-15

    We use cosmic microwave background radiation (WMAP and ACBAR), large-scale structure (SDSS luminous red galaxies), and supernova (SNLS) data to constrain the possible contribution of cold dark matter isocurvature modes to the primordial perturbation spectrum. We consider three different admixtures with adiabatic modes in a flat ΛCDM cosmology with no tensor modes: fixed correlations with a single spectral index; general correlations with a single spectral index; and general correlations with independent spectral indices for each mode. For fixed correlations, we verify the WMAP analysis for fully uncorrelated and anticorrelated modes, while for general correlations with a single index we find a small tightening of the constraint on the fractional contribution of isocurvature modes to the observed power over earlier work. For generally correlated modes and independent spectral indices our results are quite different from previous work, needing a doubling of the prior space for the isocurvature spectral index in order to explore adequately the region of high likelihood. Standard Markov-chain Monte Carlo techniques proved to be inadequate for this particular application; instead, our results are obtained with nested sampling. We also use the Bayesian evidence, calculated simply in the nested-sampling algorithm, to compare models, finding the pure adiabatic model to be favored over all our isocurvature models. This favoring is such that the logarithm of the Bayes factor, ln B < -2 for all models and ln B < -5 in the cases of fully anticorrelated modes with a single spectral index (the curvaton scenario) and generally correlated modes with a single spectral index.

  15. Parameter selection and testing the soil water model SOIL

    NASA Astrophysics Data System (ADS)

    McGechan, M. B.; Graham, R.; Vinten, A. J. A.; Douglas, J. T.; Hooda, P. S.

    1997-08-01

    The soil water and heat simulation model SOIL was tested for its suitability to study the processes of transport of water in soil. Required parameters, particularly soil hydraulic parameters, were determined by field and laboratory tests for some common soil types and for soils subjected to contrasting treatments of long-term grassland and tilled land under cereal crops. Outputs from simulations were shown to be in reasonable agreement with independently measured field drain outflows and soil water content histories.

  16. ICA model order selection of task co-activation networks

    PubMed Central

    Ray, Kimberly L.; McKay, D. Reese; Fox, Peter M.; Riedel, Michael C.; Uecker, Angela M.; Beckmann, Christian F.; Smith, Stephen M.; Fox, Peter T.; Laird, Angela R.

    2013-01-01

    Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest dimensionality of 20 for low model order ICA to examine large-scale brain networks, and dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders. PMID:24339802

  17. A Belief-based Trust Model for Dynamic Service Selection

    NASA Astrophysics Data System (ADS)

    Ali, Ali Shaikh; Rana, Omer F.

    Provision of services across institutional boundaries has become an active research area. Many such services encode access to computational and data resources (ranging from single machines to computational clusters). Such services can also be informational, and integrate different resources within an institution. Consequently, we envision a service-rich environment in the future, where service consumers can intelligently decide which services to select. If interaction between service providers/users is automated, it is necessary for these service clients to be able to automatically choose between a set of equivalent (or similar) services. In such a scenario trust serves as a benchmark to differentiate between service providers. One might therefore prioritize potential cooperative partners based on the established trust. Although many approaches exist in the literature about trust between online communities, the exact nature of trust for multi-institutional service sharing remains undefined. Therefore, the concept of trust suffers from an imperfect understanding, a plethora of definitions, and informal use in the literature. We present a formalism for describing trust within multi-institutional service sharing, and provide an implementation of this, enabling the agent to make trust-based decisions. We evaluate our formalism through simulation.

  18. On Numerical Aspects of Bayesian Model Selection in High and Ultrahigh-dimensional Settings

    PubMed Central

    Johnson, Valen E.

    2014-01-01

    This article examines the convergence properties of a Bayesian model selection procedure based on a non-local prior density in ultrahigh-dimensional settings. The performance of the model selection procedure is also compared to popular penalized likelihood methods. Coupling diagnostics are used to bound the total variation distance between iterates in a Markov chain Monte Carlo (MCMC) algorithm and the posterior distribution on the model space. In several simulation scenarios in which the number of observations exceeds 100, rapid convergence and high accuracy of the Bayesian procedure are demonstrated. Conversely, the coupling diagnostics are successful in diagnosing lack of convergence in several scenarios for which the number of observations is less than 100. The accuracy of the Bayesian model selection procedure in identifying high probability models is shown to be comparable to commonly used penalized likelihood methods, including extensions of smoothly clipped absolute deviations (SCAD) and least absolute shrinkage and selection operator (LASSO) procedures. PMID:24683431

  19. Selective Cooperation in Early Childhood - How to Choose Models and Partners.

    PubMed

    Hermes, Jonas; Behne, Tanya; Studte, Kristin; Zeyen, Anna-Maria; Gräfenhain, Maria; Rakoczy, Hannes

    2016-01-01

    Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for a strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children's selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks. PMID:27505043

  20. Selective Cooperation in Early Childhood – How to Choose Models and Partners

    PubMed Central

    Hermes, Jonas; Behne, Tanya; Studte, Kristin; Zeyen, Anna-Maria; Gräfenhain, Maria; Rakoczy, Hannes

    2016-01-01

    Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for a strength-requiring cooperation tasks), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word learning tasks). Young children’s selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks. PMID:27505043

  1. Optical modeling of black chrome solar-selective coatings

    SciTech Connect

    Sweet, J.N.; Pettit, R.B.

    1982-07-01

    Various investigations of coating microstructure are reviewed and the results of these studies are used to develop a picture of the microstructure of black chrome films plated from the Harshaw Chromonyx bath. In this model, the black chrome film is composed of roughly spherical particles which may tend to cluster together. These particles in turn are composed of small crystallites of metallic chrome and various oxides of chrome. The film void volume fraction appears to be greater than or equal to 0.6. The microstructural picture has been idealized to facilitate calculations of the spectral reflectance for films deposited onto nickel substrates and for freestanding or stripped films. In the idealized model, the metallic chromium is assumed to be in the form of spherical crystallites with concentric shells of Cr2O3, and the crystallite volume fraction is assumed to increase with depth into the film. Various experimental data are utilized to define film thickness, average volume fraction of Cr + Cr2O3, and volume ratio of Cr to Cr + Cr2O3. Both the Maxwell-Garnett (MG) and the Bruggeman effective medium theories for the dielectric constant of a composite medium are reviewed. The extension of the MG theory to high inclusion volume fractions is discussed. Various forms of the MG theory and the Bruggeman theory are then utilized in reflectance calculations for both regular and stripped films. The results indicate that the MG formalism provides the best overall description of the optical response of black chrome films. Both model and experiment show that the solar absorptance initially decreases slowly as the amount of Cr2O3 increases; however, a rapid decrease occurs when the Cr2O3 content passes 70 vol %.
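
    For reference, the Maxwell-Garnett mixing rule invoked above, for spherical inclusions of permittivity eps_i at volume fraction f in a host of permittivity eps_m; the numbers in the example are illustrative, not fitted black-chrome values.

```python
def maxwell_garnett(eps_i: complex, eps_m: complex, f: float) -> complex:
    """Effective permittivity: spherical inclusions eps_i, fraction f, host eps_m."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Illustrative values only: metal-like inclusions in an oxide-like host, f = 0.3
print(maxwell_garnett(eps_i=-10 + 8j, eps_m=4 + 0.1j, f=0.3))
```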

  2. Relative entropy as model selection tool in cluster expansions

    NASA Astrophysics Data System (ADS)

    Kristensen, Jesper; Bilionis, Ilias; Zabaras, Nicholas

    2013-05-01

    Cluster expansions are simplified, Ising-like models for binary alloys in which vibrational and electronic degrees of freedom are coarse grained. The usual practice is to learn the parameters of the cluster expansion by fitting the energy they predict to a finite set of ab initio calculations. In some cases, experiments suggest that such approaches may lead to overestimation of the phase transition temperature. In this work, we present a novel approach to fitting the parameters based on the relative entropy framework which, instead of energies, attempts to fit the Boltzmann distribution of the configurational degrees of freedom. We show how this leads to T-dependent parameters.

  3. Causal Inference and Model Selection in Complex Settings

    NASA Astrophysics Data System (ADS)

    Zhao, Shandong

    Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with point null hypothesis, why a usual likelihood ratio test does not apply for problems of this nature, and a doable fix to correctly

  4. Mutation-selection models of coding sequence evolution with site-heterogeneous amino acid fitness profiles.

    PubMed

    Rodrigue, Nicolas; Philippe, Hervé; Lartillot, Nicolas

    2010-03-01

    Modeling the interplay between mutation and selection at the molecular level is key to evolutionary studies. To this end, codon-based evolutionary models have been proposed as pertinent means of studying long-range evolutionary patterns and are widely used. However, these approaches have not yet consolidated results from amino acid level phylogenetic studies showing that selection acting on proteins displays strong site-specific effects, which translate into heterogeneous amino acid propensities across the columns of alignments; related codon-level studies have instead focused on either modeling a single selective context for all codon columns, or a separate selective context for each codon column, with the former strategy deemed too simplistic and the latter deemed overparameterized. Here, we integrate recent developments in nonparametric statistical approaches to propose a probabilistic model that accounts for the heterogeneity of amino acid fitness profiles across the coding positions of a gene. We apply the model to a dozen real protein-coding gene alignments and find it to produce biologically plausible inferences, for instance, as pertaining to site-specific amino acid constraints, as well as distributions of scaled selection coefficients. In their account of mutational features as well as the heterogeneous regimes of selection at the amino acid level, the modeling approaches studied here can form a backdrop for several extensions, accounting for other selective features, for variable population size, or for subtleties of mutational features, all with parameterizations couched within population-genetic theory. PMID:20176949

  5. Testing goodness of fit of parametric models for censored data.

    PubMed

    Nysen, Ruth; Aerts, Marc; Faes, Christel

    2012-09-20

    We propose and study a goodness-of-fit test for left-censored, right-censored, and interval-censored data assuming random censorship. The main motivation comes from dietary exposure assessment in chemical risk assessment, where the determination of an appropriate distribution for concentration data is of major importance. We base the new goodness-of-fit test procedure proposed in this paper on the order selection test. As part of the testing procedure, we extend the null model to a series of nested alternative models for censored data. Then, we use a modified AIC criterion to select the best model to describe the data. If a model with one or more extra parameters is selected, then we reject the null hypothesis. As an alternative to the use of the asymptotic null distribution of the test statistic, we define a bootstrap-based procedure. We illustrate the applicability of the test procedure on data of cadmium concentrations and on data from the Signal Tandmobiel study and demonstrate its performance characteristics through simulation studies. PMID:22714389
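
    As a minimal illustration of the AIC step in such a procedure, the sketch below fits a null model and a nested extension by maximum likelihood and rejects the null when the extension attains the lower AIC. It is a toy version only: the data and the normal/skew-normal pair are invented for the example, and the paper's handling of censoring and its bootstrap null distribution are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.skewnorm.rvs(a=4.0, size=300, random_state=rng)  # mildly skewed sample

# Null model: normal distribution (2 free parameters).
mu, sigma = stats.norm.fit(x)
aic_null = 2 * 2 - 2 * stats.norm.logpdf(x, mu, sigma).sum()

# Extended model: skew-normal, which nests the normal at shape a = 0 (3 parameters).
a, loc, scale = stats.skewnorm.fit(x)
aic_ext = 2 * 3 - 2 * stats.skewnorm.logpdf(x, a, loc, scale).sum()

print(f"AIC null: {aic_null:.1f}   AIC extended: {aic_ext:.1f}")
print("reject null" if aic_ext < aic_null else "retain null")
```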

  6. SELECTION OF CANDIDATE EUTROPHICATION MODELS FOR TOTAL MAXIMUM DAILY LOADS ANALYSES

    EPA Science Inventory

    A tiered approach was developed to evaluate candidate eutrophication models to select a common suite of models that could be used for Total Maximum Daily Loads (TMDL) analyses in estuaries, rivers, and lakes/reservoirs. Consideration for linkage to watershed models and ecologica...

  7. Many-Facet Rasch Model Selection Criteria: Examining Residuals and More.

    ERIC Educational Resources Information Center

    Schumacker, Randall E.

    This research examined the significance of facet selection in a multi-facet Rasch model analysis. The residuals or remaining error in a multi-facet Rasch model were further studied in the context of a full and reduced data-to-model fit chi-square, given the specific design. In addition, main effect facet contributions to person measures and the…

  8. Impacts of selected dietary polyphenols on caramelization in model systems.

    PubMed

    Zhang, Xinchen; Chen, Feng; Wang, Mingfu

    2013-12-15

    This study investigated the impacts of six dietary polyphenols (phloretin, naringenin, quercetin, epicatechin, chlorogenic acid and rosmarinic acid) on fructose caramelization in thermal model systems at either neutral or alkaline pH. These polyphenols were found to increase the browning intensity and antioxidant capacity of caramel. The chemical reactions in the system of sugar and polyphenol, which include formation of polyphenol-sugar adducts, were found to be partially responsible for the formation of brown pigments and heat-induced antioxidants based on instrumental analysis. In addition, rosmarinic acid was demonstrated to significantly inhibit the formation of 5-hydroxymethylfurfural (HMF). Thus this research added to the efforts of controlling caramelization by dietary polyphenols under thermal conditions, and provided some evidence to propose dietary polyphenols as functional ingredients to modify the caramel colour and bioactivity as well as to lower the amount of heat-induced contaminants such as HMF. PMID:23993506

  9. Establishing a Selection Process Model for an Ethnic Collection in a Prison Library.

    ERIC Educational Resources Information Center

    Haymann-Diaz, Barbara

    1989-01-01

    Describes a study that examined the selection process used by two inmate/library assistants to develop Hispanic and African ethnic collections in a prison library. The findings of the study are used to develop a selection process model for the development of ethnic collections using the expertise of ethnic inmate/library assistants. (25…

  10. Selecting a Response in Task Switching: Testing a Model of Compound Cue Retrieval

    ERIC Educational Resources Information Center

    Schneider, Darryl W.; Logan, Gordon D.

    2009-01-01

    How can a task-appropriate response be selected for an ambiguous target stimulus in task-switching situations? One answer is to use compound cue retrieval, whereby stimuli serve as joint retrieval cues to select a response from long-term memory. In the present study, the authors tested how well a model of compound cue retrieval could account for a…

  11. An Evaluation Model To Select an Integrated Learning System in a Large, Suburban School District.

    ERIC Educational Resources Information Center

    Curlette, William L.; And Others

    The systematic evaluation process used in Georgia's DeKalb County School System to purchase comprehensive instructional software--an integrated learning system (ILS)--is described, and the decision-making model for selection is presented. Selection and implementation of an ILS were part of an instructional technology plan for the DeKalb schools…

  12. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  13. A physically based model for dielectric charging in an integrated optical MEMS wavelength selective switch.

    SciTech Connect

    Nielson, Gregory N.; Barbastathis, George

    2005-07-01

    A physical parameter based model for dielectric charge accumulation is proposed and used to predict the displacement versus applied voltage and pull-in response of an electrostatic MEMS wavelength selective integrated optical switch.

  14. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    PubMed

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition. PMID:26232190

  15. Traditional and robust vector selection methods for use with similarity based models

    SciTech Connect

    Hines, J. W.; Garvey, D. R.

    2006-07-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
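
    A bare-bones sketch of the min-max method named above: every training vector that carries the minimum or maximum observed value of at least one signal is kept as a prototype, so the selected set bounds the operating range of the data. The data and function name are our own illustrative choices, not from the paper.

```python
import numpy as np

def min_max_select(X):
    """Indices of vectors that hold the min or max of at least one signal."""
    idx = set()
    for j in range(X.shape[1]):
        idx.add(int(np.argmin(X[:, j])))
        idx.add(int(np.argmax(X[:, j])))
    return sorted(idx)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))      # 1000 historical vectors, 5 plant signals
prototypes = min_max_select(X)
print(f"selected {len(prototypes)} of {X.shape[0]} vectors as prototypes")
```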

  16. Modeling the Effect of Selection History on Pop-Out Visual Search

    PubMed Central

    Tseng, Yuan-Chi; Glaser, Joshua I.; Caddigan, Eamon; Lleras, Alejandro

    2014-01-01

    While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task. PMID:24595032

  17. Paying for Primary Care: The Factors Associated with Physician Self-selection into Payment Models.

    PubMed

    Rudoler, David; Deber, Raisa; Barnsley, Janet; Glazier, Richard H; Dass, Adrian Rohit; Laporte, Audrey

    2015-09-01

    To determine the factors associated with primary care physician self-selection into different payment models, we used a panel of eight waves of administrative data for all primary care physicians who practiced in Ontario between 2003/2004 and 2010/2011. We used a mixed effects logistic regression model to estimate physicians' choice of three alternative payment models: fee for service, enhanced fee for service, and blended capitation. We found that primary care physicians self-selected into payment models based on existing practice characteristics. Physicians with more complex patient populations were less likely to switch into capitation-based payment models where higher levels of effort were not financially rewarded. These findings suggested that investigations aimed at assessing the impact of different primary care reimbursement models on outcomes, including costs and access, should first account for potential selection effects. PMID:26190516

  19. Generative model selection using a scalable and size-independent complex network classifier

    SciTech Connect

    Motallebi, Sadegh Aliakbary, Sadegh Habibi, Jafar

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named “Generative Model Selection for Complex Networks,” outperforms existing methods with respect to accuracy, scalability, and size-independence.
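
    The decision-tree idea lends itself to a compact sketch: simulate graphs from candidate generators, summarize each with a few topological features, and train a tree to recognize the generator. The two generators and three features below are illustrative stand-ins for the seven models and richer feature set used in the paper.

```python
import networkx as nx
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def features(G):
    degrees = [d for _, d in G.degree()]
    return [nx.average_clustering(G), float(np.mean(degrees)), float(np.std(degrees))]

X, y = [], []
for seed in range(60):   # 60 training graphs per generator
    X.append(features(nx.erdos_renyi_graph(200, 0.04, seed=seed))); y.append("ER")
    X.append(features(nx.barabasi_albert_graph(200, 4, seed=seed))); y.append("BA")

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
target = nx.barabasi_albert_graph(200, 4, seed=999)   # "unknown" network
print("predicted generator:", clf.predict([features(target)])[0])
```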

  20. Source-mask selection using computational lithography: further investigation incorporating rigorous resist models

    NASA Astrophysics Data System (ADS)

    Kapasi, Sanjay; Robertson, Stewart; Biafore, John; Smith, Mark D.

    2009-12-01

    Recent publications have emphasized the criticality of computational lithography in source-mask selection for 32 and 22 nm technology nodes. Lithographers often select the illuminator geometries based on analyzing aerial images for a limited set of structures using computational lithography tools. Last year, Biafore et al.1 demonstrated the divergence between aerial image models and resist models in computational lithography. A follow-up study2 illustrated that the optimal illuminator differs when selected based on a resist model rather than an aerial image model. In that study, optimal source shapes were evaluated for 1D logic patterns using an aerial image model and two distinct commercial resist models; a physics-based lumped parameter resist model (LPM) was used. Accurately calibrated full physical models, unlike lumped models, are portable across imaging conditions. This study extends the previous work: full physical resist models (FPM) with calibrated resist parameters3,4,5,6 will be used to select optimum illumination geometries for 1D logic patterns. Several imaging parameters - such as numerical aperture (NA), source geometries (annular, quadrupole, etc.), and illumination configurations for different sizes and pitches - will be explored. Our goal is to compare and analyze the optimal source shapes across various imaging conditions. In the end, the optimal source-mask solution for a given set of designs, based on all the models, will be recommended.

  1. A model of face selection in viewing video stories

    PubMed Central

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereo-typed gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621

  3. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  4. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    PubMed

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data in genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening used in the gene score method, our method continuously screens SNPs based on the output of a standard univariate analysis of each SNP's marginal association. At the second stage, the predictive model is built by generalized ridge regression that simultaneously uses the screened SNPs, with each SNP's weight determined by the strength of its marginal association. Continuous SNP screening by smooth thresholding not only stabilizes prediction but also leads to a closed-form expression for the generalized degrees of freedom (GDF). The GDF yields Stein's unbiased risk estimate (SURE), which enables a data-dependent choice of the optimal SNP screening cutoff without cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods such as the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows performance comparable to, and sometimes better than, the lasso and elastic net, which are known for good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
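
    A much-simplified sketch of the two-stage idea on synthetic genotypes: SNPs are weighted continuously by marginal association (a smooth threshold rather than a hard include/exclude cut), and a ridge regression is then fit to the weighted genotypes. The weight formula and cutoff below are our own illustrative choices; the paper's GDF/SURE selection of the cutoff is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 400, 2000
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
beta = np.zeros(p); beta[:20] = 0.3                   # 20 causal SNPs
y = X @ beta + rng.normal(size=n)

# Stage 1: marginal association, turned into smooth (continuous) weights.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
z = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
cutoff = np.quantile(z, 0.95)                          # illustrative cutoff
w = np.clip(1.0 - (cutoff / np.maximum(z, 1e-12)) ** 2, 0.0, None)

# Stage 2: ridge regression on the weighted genotypes.
Xw = Xc * w
lam = 10.0
coef = np.linalg.solve(Xw.T @ Xw + lam * np.eye(p), Xw.T @ yc)
print("SNPs passing the smooth threshold:", int((w > 0).sum()))
print("largest |coef| at SNPs:", np.argsort(-np.abs(coef))[:5])
```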

  5. Models of Preconception Care Implementation in Selected Countries

    PubMed Central

    Lo, Sue Seen-Tsing; Zhuo, Jiatong; Han, Jung-Yeol; Delvoye, Pierre; Zhu, Li

    2006-01-01

    Globally, maternal and child health faces diverse challenges depending on the country's level of development. Some countries have introduced or explored preconception care for various reasons. Falling birth rates and increasing knowledge about risk factors for adverse pregnancy outcomes led to the introduction of preconception care in Hong Kong in 1998, and South Korea in 2004. In Hong Kong, comprehensive preconception care including laboratory tests is provided to over 4000 women each year at a cost of $75 per person. In Korea, about 60% of the women served have a known medical risk history, and the challenge is to expand program capacity to all women who plan pregnancy and to conduct social marketing. Belgium has established an ad hoc committee to develop a comprehensive social marketing and professional training strategy for pilot testing preconception care models in the French-speaking part of Belgium, an area that represents 5 million people and 50,000 births per year, using prenatal care and pediatric clinics, gynecological departments, and the genetic centers. In China, Guangxi province piloted preconceptional HIV testing and counseling among couples who sought the then mandatory premarital medical examination as a component of the three-pronged approach to reduce mother-to-child transmission of HIV. HIV testing rates among couples increased from 38% to 62% over a one-year period. In October 2003, China changed the legal requirement of premarital medical examination from mandatory to “voluntary.” This change was interpreted by most women to mean that the premarital health examination was “unnecessary,” and overall premarital health examination rates dropped. Social marketing efforts piloted in 2004 indicated that 95% of women were willing to pay up to RMB 100 (US$12) for preconception health care services. These case studies illustrate programmatic feasibility of preconception care services to address maternal and child health and other public

  6. QSAR modeling for quinoxaline derivatives using genetic algorithm and simulated annealing based feature selection.

    PubMed

    Ghosh, P; Bagchi, M C

    2009-01-01

    With a view to the rational design of selective quinoxaline derivatives, 2D and 3D-QSAR models have been developed for the prediction of anti-tubercular activities. Successful implementation of a predictive QSAR model largely depends on the selection of a preferred set of molecular descriptors that can signify the chemico-biological interaction. Genetic algorithm (GA) and simulated annealing (SA) are applied as variable selection methods for model development. 2D-QSAR modeling using GA or SA based partial least squares (GA-PLS and SA-PLS) methods identified several topological and electrostatic descriptors as important factors for anti-tubercular activity. Kohonen networks and counter-propagation artificial neural networks (CP-ANN) with GA and SA based feature selection have been applied for such QSAR modeling of quinoxaline compounds. Out of a variable pool of 380 molecular descriptors, predictive QSAR models are developed for the training set and validated on the test set compounds, and a comparative study of the relative effectiveness of linear and non-linear approaches has been carried out. Further analysis using the 3D-QSAR technique identifies two models, obtained by the GA-PLS and SA-PLS methods, leading to anti-tubercular activity prediction. The influences of steric and electrostatic field effects generated by the contribution plots are discussed. The results indicate that SA is a very effective variable selection approach for such 3D-QSAR modeling.
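
    Simulated-annealing variable selection of the kind described can be sketched generically: the state is a descriptor subset, the objective is an AIC-style penalized fit, and worse moves are accepted with a temperature-controlled probability. Data, objective, and cooling schedule below are our own toy choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 120, 40
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.ones(5) + 0.5 * rng.normal(size=n)    # 5 relevant descriptors

def score(mask):
    """AIC-style penalized residual sum of squares of an OLS fit on the subset."""
    if not mask.any():
        return np.inf
    Xs = X[:, mask]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = float(((y - Xs @ beta) ** 2).sum())
    return n * np.log(rss / n) + 2 * mask.sum()

mask = rng.random(p) < 0.5                              # random starting subset
best_val, best_mask = score(mask), mask.copy()
T = 5.0
for _ in range(2000):
    cand = mask.copy()
    cand[rng.integers(p)] ^= True                       # flip one descriptor in/out
    delta = score(cand) - score(mask)
    if delta < 0 or rng.random() < np.exp(-delta / T):  # Metropolis acceptance
        mask = cand
        if score(mask) < best_val:
            best_val, best_mask = score(mask), mask.copy()
    T *= 0.998                                          # geometric cooling
print("selected descriptors:", np.flatnonzero(best_mask))
```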

  7. Differences between selection on sex versus recombination in red queen models with diploid hosts.

    PubMed

    Agrawal, Aneil F

    2009-08-01

    The Red Queen hypothesis argues that parasites generate selection for genetic mixing (sex and recombination) in their hosts. A number of recent papers have examined this hypothesis using models with haploid hosts. In these haploid models, sex and recombination are selectively equivalent. However, sex and recombination are not equivalent in diploids because selection on sex depends on the consequences of segregation as well as recombination. Here I compare how parasites select on modifiers of sexual reproduction and modifiers of recombination rate. Across a wide set of parameters, parasites tend to select against both sex and recombination, though recombination is favored more often than is sex. There is little correspondence between the conditions favoring sex and those favoring recombination, indicating that the direction of selection on sex is often determined by the effects of segregation, not recombination. Moreover, when sex was favored it is usually due to a long-term advantage whereas short-term effects are often responsible for selection favoring recombination. These results strongly indicate that Red Queen models focusing exclusively on the effects of recombination cannot be used to infer the type of selection on sex that is generated by parasites on diploid hosts.

  8. Computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models

    NASA Astrophysics Data System (ADS)

    Sima, Aleksandra Anna; Bonaventura, Xavier; Feixas, Miquel; Sbert, Mateu; Howell, John Anthony; Viola, Ivan; Buckley, Simon John

    2013-03-01

    Photorealistic 3D models are used for visualization, interpretation and spatial measurement in many disciplines, such as cultural heritage, archaeology and geoscience. Using modern image- and laser-based 3D modelling techniques, it is normal to acquire more data than is finally used for 3D model texturing, as images may be acquired from multiple positions, with large overlap, or with different cameras and lenses. Such redundant image sets require sorting to restrict the number of images, increasing the processing efficiency and realism of models. However, selection of image subsets optimized for texturing purposes is an example of complex spatial analysis. Manual selection may be challenging and time-consuming, especially for models of rugose topography, where the user must account for occlusions and ensure coverage of all relevant model triangles. To address this, this paper presents a framework for computer-aided image geometry analysis and subset selection for optimizing texture quality in photorealistic models. The framework was created to offer algorithms for candidate image subset selection, whilst supporting refinement of subsets in an intuitive and visual manner. Automatic image sorting was implemented using algorithms originating in computer science and information theory, and variants of these were compared using multiple 3D models and covering image sets, collected for geological applications. The image subsets provided by the automatic procedures were compared to manually selected sets and their suitability for 3D model texturing was assessed. Results indicate that the automatic sorting algorithms are a promising alternative to manual methods. An algorithm based on a greedy solution to the weighted set-cover problem provided image sets closest to the quality and size of the manually selected sets. The improved automation and more reliable quality indicators make the photorealistic model creation workflow more accessible for application experts.
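
    The greedy weighted set-cover step admits a tiny sketch: each image covers a set of model triangles, and the image covering the largest weight of still-uncovered triangles is added at every iteration. The toy coverage data below stand in for real visibility computations.

```python
def greedy_cover(coverage, weights):
    """coverage: {image: set(triangle ids)}; weights: {triangle id: quality weight}."""
    uncovered = set(weights)
    chosen = []
    while uncovered:
        image, gain = max(
            ((img, sum(weights[t] for t in tris & uncovered))
             for img, tris in coverage.items()),
            key=lambda pair: pair[1])
        if gain == 0:        # remaining triangles are visible in no image
            break
        chosen.append(image)
        uncovered -= coverage[image]
    return chosen

coverage = {"img_a": {1, 2, 3}, "img_b": {3, 4}, "img_c": {4, 5, 6}}
weights = {t: 1.0 for t in range(1, 7)}                 # uniform triangle weights
print(greedy_cover(coverage, weights))                  # -> ['img_a', 'img_c']
```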

  9. Double-input compartmental modeling and spectral analysis for the quantification of positron emission tomography data in oncology

    NASA Astrophysics Data System (ADS)

    Tomasi, G.; Kimberley, S.; Rosso, L.; Aboagye, E.; Turkheimer, F.

    2012-04-01

    In positron emission tomography (PET) studies involving organs other than the brain, ignoring the metabolite contribution to the tissue time-activity curves (TAC), as in the standard single-input (SI) models, may compromise the accuracy of the estimated parameters. Here we employed double-input (DI) compartmental modeling (CM), previously used for [11C]thymidine, and a novel DI spectral analysis (SA) approach on the tracers 5-[18F]fluorouracil (5-[18F]FU) and [18F]fluorothymidine ([18F]FLT). CM and SA were performed initially with an SI approach using the parent plasma TAC as an input function. These methods were then employed using a DI approach with the metabolite plasma TAC as an additional input function. Regions of interest (ROIs) corresponding to healthy liver, kidneys and liver metastases for 5-[18F]FU and to tumor, vertebra and liver for [18F]FLT were analyzed. For 5-[18F]FU, the improvement of the fit quality with the DI approaches was remarkable; in CM, the Akaike information criterion (AIC) always selected the DI over the SI model. Volume of distribution estimates obtained with DI CM and DI SA were in excellent agreement, for both parent 5-[18F]FU (R² = 0.91) and metabolite [18F]FBAL (R² = 0.99). For [18F]FLT, the DI methods provided notable improvements but less substantial than for 5-[18F]FU due to the lower rate of metabolism of [18F]FLT. On the basis of the AIC values, agreement between [18F]FLT Ki estimated with the SI and DI models was good (R² = 0.75) for the ROIs where the metabolite contribution was negligible, indicating that the additional input did not bias the parent tracer only-related estimates. When the AIC suggested a substantial contribution of the metabolite [18F]FLT-glucuronide, on the other hand, the change in the parent tracer only-related parameters was significant (R² = 0.33 for Ki). Our results indicated that improvements of DI over SI approaches can range from moderate to substantial and are more significant for tracers with
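
    The AIC-based choice between a simpler and a richer kinetic model can be illustrated schematically: fit nested curve models to a synthetic time-activity curve and keep the one with the lower AIC computed from the residual sum of squares. The one- and two-exponential forms below are shape-only stand-ins for the actual SI and DI compartmental models.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_exp(t, a, k):
    return a * np.exp(-k * t)

def two_exp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

rng = np.random.default_rng(4)
t = np.linspace(0.1, 60.0, 40)                          # minutes
tac = two_exp(t, 5.0, 0.5, 2.0, 0.02) + 0.1 * rng.normal(size=t.size)

def aic(model, p0):
    popt, _ = curve_fit(model, t, tac, p0=p0, maxfev=10000)
    rss = float(((tac - model(t, *popt)) ** 2).sum())
    n, k = t.size, len(popt)
    return n * np.log(rss / n) + 2 * k                  # AIC from Gaussian residuals

print("AIC, 1-exponential:", round(aic(one_exp, [5.0, 0.1]), 1))
print("AIC, 2-exponential:", round(aic(two_exp, [4.0, 0.5, 1.0, 0.01]), 1))
```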

  10. Pairwise Variable Selection for High-dimensional Model-based Clustering

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George

    2009-01-01

    Variable selection for clustering is an important and challenging problem in high-dimensional data analysis. Existing variable selection methods for model-based clustering select informative variables in a “one-in-all-out” manner; that is, a variable is selected if at least one pair of clusters is separable by this variable and removed if it cannot separate any of the clusters. In many applications, however, it is of interest to further establish exactly which clusters are separable by each informative variable. To address this question, we propose a pairwise variable selection method for high-dimensional model-based clustering. The method is based on a new pairwise penalty. Results on simulated and real data show that the new method performs better than alternative approaches which use ℓ1 and ℓ∞ penalties and offers better interpretation. PMID:19912170

  11. Statistical selection of multiple-input multiple-output nonlinear dynamic models of spike train transformation.

    PubMed

    Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W

    2007-01-01

    A multiple-input multiple-output nonlinear dynamic model of spike train to spike train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods of selecting significant inputs (self-terms) and interactions between inputs (cross-terms) for this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contained far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models could be used to analyze the functional interactions between neurons during behavior.

  12. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    SciTech Connect

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box, and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  13. Selecting Spatial Scale of Covariates in Regression Models of Environmental Exposures

    PubMed Central

    Grant, Lauren P.; Gennings, Chris; Wheeler, David C.

    2015-01-01

    Environmental factors or socioeconomic status variables used in regression models to explain environmental chemical exposures or health outcomes are often in practice modeled at the same buffer distance or spatial scale. In this paper, we present four model selection algorithms that select the best spatial scale for each buffer-based or area-level covariate. Contamination of drinking water by nitrate is a growing problem in agricultural areas of the United States, as ingested nitrate can lead to the endogenous formation of N-nitroso compounds, which are potent carcinogens. We applied our methods to model nitrate levels in private wells in Iowa. We found that environmental variables were selected at different spatial scales and that a model allowing spatial scale to vary across covariates provided the best goodness of fit. Our methods can be applied to investigate the association between environmental risk factors available at multiple spatial scales or buffer distances and measures of disease, including cancers. PMID:25983543
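
    One of the simpler selection ideas discussed above can be sketched directly: for each covariate measured at several buffer distances, keep the scale whose regression attains the lowest AIC. The data here are hypothetical, and the covariates are screened one at a time, whereas the paper's algorithms also consider them jointly.

```python
import numpy as np

def gaussian_aic(y, x):
    """AIC of a single-covariate OLS regression with intercept."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

rng = np.random.default_rng(5)
n, scales = 300, [250, 500, 1000, 2000]                 # buffer radii in metres
cov = {s: rng.normal(size=n) for s in scales}           # covariate at each scale
y = 2.0 * cov[500] + rng.normal(size=n)                 # the 500 m version drives y

best = min(scales, key=lambda s: gaussian_aic(y, cov[s]))
print("selected buffer distance:", best, "m")
```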

  14. Computational model of selection by consequences: patterns of preference change on concurrent schedules.

    PubMed

    Kulubekova, Saule; McDowell, J J

    2013-09-01

    The computational model of selection by consequences is an ontogenetic dynamic account of adaptive behavior based on the Darwinian principle of selection by consequences. The model is a virtual organism based on a genetic algorithm, a class of computational algorithms that instantiate the principles of selection, fitness, reproduction and mutation. The computational model has been thoroughly tested in experiments with a variety of single alternative and concurrent schedules. A number of published reports demonstrate that the model generates patterns of behavior that are quantitatively equivalent to the findings from live organisms. The experiments and analyses in this study assess the behavior of the computational model for evidence of preference change phenomena in environments with rapidly changing reinforcement rate ratios. Molar and molecular effects of behavioral adjustment were consistent with those observed in live organisms. The results of this study provide strong evidence supporting the selectionist account of adaptive behavior.

  15. INFUSE: Interactive Feature Selection for Predictive Modeling of High Dimensional Data.

    PubMed

    Krause, Josua; Perer, Adam; Bertini, Enrico

    2014-12-01

    Predictive modeling techniques are increasingly being used by data scientists to understand the probability of predicted outcomes. However, for data that is high-dimensional, a critical step in predictive modeling is determining which features should be included in the models. Feature selection algorithms are often used to remove non-informative features from models. However, there are many different classes of feature selection algorithms. Deciding which one to use is problematic as the algorithmic output is often not amenable to user interpretation. This limits the ability for users to utilize their domain expertise during the modeling process. To improve on this limitation, we developed INFUSE, a novel visual analytics system designed to help analysts understand how predictive features are being ranked across feature selection algorithms, cross-validation folds, and classifiers. We demonstrate how our system can lead to important insights in a case study involving clinical researchers predicting patient outcomes from electronic medical records.

  16. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.

  17. A model of two-way selection system for human behavior.

    PubMed

    Zhou, Bin; Qin, Shujia; Han, Xiao-Pu; He, Zhe; Xie, Jia-Rong; Wang, Bing-Hong

    2014-01-01

    Two-way selection is a common phenomenon in nature and society. It appears in processes like choosing a mate between men and women, making contracts between job hunters and recruiters, and trading between buyers and sellers. In this paper, we propose a model of a two-way selection system and present an analytical solution for the expected total number of successful matches, together with the regular pattern that the matching rate tends toward an inverse proportion to either the ratio between the two sides or the ratio of the total number of states to the size of the smaller group. The proposed model is verified by empirical data from matchmaking fairs. Results indicate that the model predicts this typical real-world two-way selection behavior within a bounded error, and it is thus helpful for understanding the dynamical mechanism of real-world two-way selection systems. PMID:24454687
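
    A toy simulation in the spirit of the result above, under our own much-simplified courting rule (not the paper's model): each member of the larger side courts one random member of the smaller side, who accepts at most one suitor. The matching rate then falls roughly in inverse proportion to the size ratio between the two sides.

```python
import numpy as np

rng = np.random.default_rng(6)

def matching_rate(n_small, n_large, trials=200):
    """Average fraction of all individuals who end up in a matched pair."""
    rates = []
    for _ in range(trials):
        # Each larger-side member courts one random smaller-side member;
        # each courted member accepts at most one suitor.
        targets = rng.integers(n_small, size=n_large)
        pairs = np.unique(targets).size
        rates.append(2 * pairs / (n_small + n_large))
    return float(np.mean(rates))

for ratio in (1, 2, 4, 8):
    print(f"size ratio {ratio}: matching rate {matching_rate(50, 50 * ratio):.3f}")
```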

  18. Model selection in the weighted generalized estimating equations for longitudinal data with dropout.

    PubMed

    Gosho, Masahiko

    2016-05-01

    We propose criteria for variable selection in the mean model and for the selection of a working correlation structure in longitudinal data with dropout missingness using weighted generalized estimating equations. The proposed criteria are based on a weighted quasi-likelihood function and a penalty term. Our simulation results show that the proposed criteria frequently select the correct model in candidate mean models. The proposed criteria also have good performance in selecting the working correlation structure for binary and normal outcomes. We illustrate our approaches using two empirical examples. In the first example, we use data from a randomized double-blind study to test the cancer-preventing effects of beta carotene. In the second example, we use longitudinal CD4 count data from a randomized double-blind study. PMID:26509243

  19. DEVELOPMENT OF AN AGGREGATION AND EPISODE SELECTION SCHEME TO SUPPORT THE MODELS-3 COMMUNITY MULTISCALE AIR QUALITY MODEL

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation of use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700-hPa east-west and north-south...

  20. Cross-validation pitfalls when selecting and assessing regression and classification models

    PubMed Central

    2014-01-01

    Background: We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. Methods: We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. Results: We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. Conclusions: We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error. PMID:24678909
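
    The repeated and nested procedures described above map directly onto scikit-learn primitives; the sketch below tunes a ridge penalty by grid search over repeated V-fold splits and then assesses it with an outer (nested) cross-validation loop. The dataset and parameter grid are illustrative, not from the paper.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=30, noise=10.0, random_state=0)

# Parameter tuning: grid search over 5-fold splits repeated 10 times.
inner = RepeatedKFold(n_splits=5, n_repeats=10, random_state=1)
search = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=inner)

# Model assessment: the whole tuning procedure is re-run inside each outer
# fold, so the reported error is not biased by the tuning itself.
outer_scores = cross_val_score(search, X, y, cv=5)
print(f"nested CV R^2: {outer_scores.mean():.3f} +/- {outer_scores.std():.3f}")
```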

  1. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs, suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These types of networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals, and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features, to multi-chip systems implementing saliency-map based models of selective attention.

  2. Natural and sexual selection giveth and taketh away reproductive barriers: models of population divergence in guppies.

    PubMed

    Labonne, Jacques; Hendry, Andrew P

    2010-07-01

    The standard predictions of ecological speciation might be nuanced by the interaction between natural and sexual selection. We investigated this hypothesis with an individual-based model tailored to the biology of guppies (Poecilia reticulata). We specifically modeled the situation where a high-predation population below a waterfall colonizes a low-predation population above a waterfall. Focusing on the evolution of male color, we confirm that divergent selection causes the appreciable evolution of male color within 20 generations. The rate and magnitude of this divergence were reduced when dispersal rates were high and when female choice did not differ between environments. Adaptive divergence was always coupled to the evolution of two reproductive barriers: viability selection against immigrants and hybrids. Different types of sexual selection, however, led to contrasting results for another potential reproductive barrier: mating success of immigrants. In some cases, the effects of natural and sexual selection offset each other, leading to no overall reproductive isolation despite strong adaptive divergence. Sexual selection acting through female choice can thus strongly modify the effects of divergent natural selection and thereby alter the standard predictions of ecological speciation. We also found that under no circumstances did divergent selection cause appreciable divergence in neutral genetic markers.

  3. The Performance of IRT Model Selection Methods with Mixed-Format Tests

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2012-01-01

    When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…

  4. The Student-Selection Process: A Model of Student Courses in Higher Education.

    ERIC Educational Resources Information Center

    Saunders, J. A.; Lancaster, G. A.

    Factors that affect college students' choice of studies and implications for colleges and universities that are competing for the declining numbers of students were assessed. A student-selection process model, derived from the innovation-decision model, provides some insights into the choice process and indicates the likely limitations of the…

  5. AN AGGREGATION AND EPISODE SELECTION SCHEME FOR EPA'S MODELS-3 CMAQ

    EPA Science Inventory

    The development of an episode selection and aggregation approach, designed to support distributional estimation for use with the Models-3 Community Multiscale Air Quality (CMAQ) model, is described. The approach utilized cluster analysis of the 700 hPa u and v wind field compo...

  6. Variable selection with random forest: Balancing stability, performance, and interpretation in ecological and environmental modeling

    EPA Science Inventory

    Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...

  7. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-03

    ... Natural Water Bodies AGENCY: Nuclear Regulatory Commission. ACTION: Withdrawal notice. SUMMARY: The U.S... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies....

  8. Perturbation Selection and Local Influence Analysis for Nonlinear Structural Equation Model

    ERIC Educational Resources Information Center

    Chen, Fei; Zhu, Hong-Tu; Lee, Sik-Yum

    2009-01-01

    Local influence analysis is an important statistical method for studying the sensitivity of a proposed model to model inputs. One of its important issues is related to the appropriate choice of a perturbation vector. In this paper, we develop a general method to select an appropriate perturbation vector and a second-order local influence measure…

  9. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops

    PubMed Central

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an “island model” inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of

  10. [Near-infrared spectrum quantitative analysis model based on principal components selected by elastic net].

    PubMed

    Chen, Wan-hui; Liu, Xu-hua; He, Xiong-kui; Min, Shun-geng; Zhang, Lu-da

    2010-11-01

    Elastic net improves on the least-squares method by introducing L1 and L2 penalties, and it has the advantage of performing variable selection. A quantitative analysis model built with elastic net can therefore improve prediction accuracy. Using 89 wheat samples as the experimental material, the spectral principal components of the samples were selected by elastic net. An analysis model relating the near-infrared spectrum to the wheat protein content was established, and the feasibility of using elastic net to build the quantitative analysis model was confirmed. In the experiment, the 89 wheat samples were randomly divided into two groups, with 60 samples as the model set and 29 samples as the prediction set. The 60 samples were used to build an analysis model to predict the protein contents of the 29 samples; the correlation coefficient (R) between the predicted values and the chemically observed values was 0.9849, with a mean relative error of 2.48%. To further investigate the feasibility and stability of the model, the 89 samples were randomly split five times, with 60 samples as the model set and 29 as the prediction set. The five groups of principal components selected by elastic net for model building were basically consistent, and the model prediction accuracies were all better than PCR and similar to PLS. Given that elastic net performs variable selection and the resulting model predicts well, elastic net is a suitable method for building chemometric quantitative analysis models. PMID:21284156
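
    The modeling chain (principal components, then a sparse linear model on their scores, then held-out prediction) can be sketched with scikit-learn on synthetic "spectra". The 89-sample size and 60/29 split mirror the paper, while the data and hyperparameters below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n, n_wavelengths = 89, 700
X = rng.normal(size=(n, n_wavelengths)).cumsum(axis=1)  # smooth spectrum-like curves
y = 0.01 * X[:, 100] + 0.02 * X[:, 400] + rng.normal(scale=0.5, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=29, random_state=0)

pca = PCA(n_components=20).fit(X_tr)
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(pca.transform(X_tr), y_tr)

pred = model.predict(pca.transform(X_te))
r = np.corrcoef(pred, y_te)[0, 1]
print(f"components kept: {np.sum(model.coef_ != 0)} of 20, prediction R = {r:.3f}")
```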

  11. Making good choices with variable information: a stochastic model for nest-site selection by honeybees.

    PubMed

    Perdriau, Benjamin S; Myerscough, Mary R

    2007-04-22

    A density-dependent Markov process model is constructed for information transfer among scouts during nest-site selection by honeybees (Apis mellifera). The effects of site quality, competition between sites and delays in site discovery are investigated. The model predicts that bees choose the better of two sites more reliably when both sites are of low quality than when both sites are of high quality and that delay in finding a second site has most effect on the final choice when both sites are of high quality. The model suggests that stochastic effects in honeybee nest-site selection confer no advantage on the swarm. PMID:17301012
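
    A toy Gillespie-style simulation conveys the flavor of such a density-dependent Markov model of scout recruitment; the rate expressions and constants below are illustrative assumptions, not the published model.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate(qA, qB, delay_B=0.0, n_scouts=100, t_end=50.0):
          """Toy recruitment race between two nest sites A and B.

          Uncommitted scouts commit to a site at a rate that grows with the
          number already committed there (density dependence) and with site
          quality q; site B only becomes discoverable after delay_B.
          """
          nA = nB = 0
          t = 0.0
          while t < t_end and nA + nB < n_scouts:
              free = n_scouts - nA - nB
              rate_A = free * qA * (0.1 + 0.05 * nA)
              rate_B = free * qB * (0.1 + 0.05 * nB) if t >= delay_B else 0.0
              total = rate_A + rate_B
              if total == 0.0:                      # nothing discoverable yet
                  t = delay_B
                  continue
              t += rng.exponential(1.0 / total)     # time to next commitment
              if rng.random() < rate_A / total:
                  nA += 1
              else:
                  nB += 1
          return nA, nB

      # How often does the swarm favor the better site when both are good?
      runs = [simulate(qA=1.0, qB=0.9) for _ in range(500)]
      frac = np.mean([nA > nB for nA, nB in runs])
      print(f"better site chosen in {frac:.0%} of runs")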

  12. Model selection forecasts for the spectral index from the Planck satellite

    SciTech Connect

    Pahud, Cedric; Liddle, Andrew R.; Mukherjee, Pia; Parkinson, David

    2006-06-15

    The recent WMAP3 results have placed measurements of the spectral index n_S in an interesting position. While parameter estimation techniques indicate that the Harrison-Zel'dovich spectrum n_S = 1 is strongly excluded (in the absence of tensor perturbations), Bayesian model selection techniques reveal that the case against n_S = 1 is not yet conclusive. In this paper, we forecast the ability of the Planck satellite mission to use Bayesian model selection to convincingly exclude (or favor) the Harrison-Zel'dovich model.

  13. Mode-selective quantization and multimodal effective models for spherically layered systems

    NASA Astrophysics Data System (ADS)

    Dzsotjan, D.; Rousseaux, B.; Jauslin, H. R.; des Francs, G. Colas; Couteau, C.; Guérin, S.

    2016-08-01

    We propose a geometry-specific, mode-selective quantization scheme for coupled field-emitter systems that makes it easy to include material and geometrical properties, intrinsic losses, and the positions of an arbitrary number of quantum emitters. The method is presented through the example of a spherically symmetric, nonmagnetic, arbitrarily layered system. We then provide a framework for projecting the system onto simpler, effective cavity QED models. Maintaining a well-defined connection to the original quantization, we derive the emerging effective quantities from the full, mode-selective model in a mathematically consistent way. We discuss the uses and limitations of these effective models.

  14. General kin selection models for genetic evolution of sib altruism in diploid and haplodiploid species.

    PubMed

    Levitt, P R

    1975-11-01

    A population genetic approach is presented for general analysis and comparison of kin selection models of sib and half-sib altruism. Nine models are described, each assuming a particular mode of inheritance, number of female inseminations, and Mendelian dominance of the altruist gene. In each model, the selective effects of altruism are described in terms of two general fitness functions, A(beta) and S(beta), giving respectively the expected fitness of an altruist and a nonaltruist as a function of the fraction of altruists beta in a given sibship. For each model, exact conditions are reported for stability at altruist and nonaltruist fixation. Under the Table 3 axioms, the stability conditions may then be partially ordered on the basis of implications holding between pairs of conditions. The partial orderings are compared with predictions of the kin selection theory of Hamilton.

  15. Real-world datasets for portfolio selection and solutions of some stochastic dominance portfolio models.

    PubMed

    Bruni, Renato; Cesarone, Francesco; Scozzari, Andrea; Tardella, Fabio

    2016-09-01

    A large number of portfolio selection models have appeared in the literature since the pioneering work of Markowitz. However, even when computational and empirical results are described, they are often hard to replicate and compare due to the unavailability of the datasets used in the experiments. We provide here several datasets for portfolio selection generated using real-world price values from several major stock markets. The datasets contain weekly return values, adjusted for dividends and for stock splits, which are cleaned from errors as much as possible. The datasets are available in different formats, and can be used as benchmarks for testing the performances of portfolio selection models and for comparing the efficiency of the algorithms used to solve them. We also provide, for these datasets, the portfolios obtained by several selection strategies based on Stochastic Dominance models (see "On Exact and Approximate Stochastic Dominance Strategies for Portfolio Selection" (Bruni et al. [2])). We believe that testing portfolio models on publicly available datasets greatly simplifies the comparison of the different portfolio selection strategies. PMID:27508232
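
    As a sketch of how such benchmark return datasets might be consumed, the snippet below computes the closed-form minimum-variance portfolio under the budget constraint alone (so short positions are allowed); the file name and format are assumptions.

      import numpy as np

      # Hypothetical weekly-return matrix: rows = weeks, columns = assets.
      R = np.loadtxt("weekly_returns.csv", delimiter=",")

      mu = R.mean(axis=0)                # mean weekly return per asset
      Sigma = np.cov(R, rowvar=False)    # sample covariance of returns

      # Closed-form minimum-variance weights subject only to sum(w) = 1:
      #   w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
      ones = np.ones(len(mu))
      w = np.linalg.solve(Sigma, ones)
      w /= ones @ w

      print("portfolio variance:    ", w @ Sigma @ w)
      print("expected weekly return:", w @ mu)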

  17. Evaluation of two outlier-detection-based methods for detecting tissue-selective genes from microarray data.

    PubMed

    Kadota, Koji; Konishi, Tomokazu; Shimizu, Kentaro

    2007-01-01

    Large-scale expression profiling using DNA microarrays enables identification of tissue-selective genes whose expression is considerably higher and/or lower in some tissues than in others. Among the many possible methods, only two outlier-detection-based methods (an AIC-based method and Sprent's non-parametric method) can handle the various types of selective patterns on an equal footing, yet they produce substantially different results. We investigated the performance of these two methods under different parameter settings and with a reduced number of samples, focusing on their ability to detect selective expression patterns robustly. We applied them to public microarray data collected from 36 normal human tissue samples and analyzed the effects of both changing the parameter settings and reducing the number of samples. The AIC-based method was more robust in both cases. The findings confirm that the use of the AIC-based method in the recently proposed ROKU method for detecting tissue-selective expression patterns is correct and that Sprent's method is not suitable for ROKU. PMID:19936074
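
    The flavor of an AIC-guided outlier search can be shown with a simplified mean-shift model, sketched below; this is not the exact procedure of the paper or of ROKU, and the two-parameter penalty and the max_k cap are assumptions.

      import numpy as np

      def tissue_selective_outliers(x, max_k=3):
          """AIC-guided outlier search for one gene across tissues.

          Mean-shift model: the k tissues farthest from the median each get
          their own mean (zero residual); the rest share one mean. Up to a
          constant, AIC = n*log(RSS/n) + 2*(k + 2); the k minimising it wins,
          and k > 0 flags the gene as tissue-selective.
          """
          x = np.asarray(x, dtype=float)
          n = len(x)
          order = np.argsort(-np.abs(x - np.median(x)))   # most extreme first
          best_k, best_aic = 0, np.inf
          for k in range(max_k + 1):
              keep = np.ones(n, dtype=bool)
              keep[order[:k]] = False
              resid = x[keep] - x[keep].mean()
              aic = n * np.log((resid ** 2).sum() / n) + 2 * (k + 2)
              if aic < best_aic:
                  best_k, best_aic = k, aic
          return best_k, sorted(int(i) for i in order[:best_k])

      # Example: one gene strongly up-regulated in 2 of 36 tissues.
      rng = np.random.default_rng(0)
      expr = rng.normal(5.0, 0.5, size=36)
      expr[[3, 17]] += 6.0
      print(tissue_selective_outliers(expr))   # expected: (2, [3, 17])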

  18. A selection model for accounting for publication bias in a full network meta-analysis.

    PubMed

    Mavridis, Dimitris; Welton, Nicky J; Sutton, Alex; Salanti, Georgia

    2014-12-30

    Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.

  19. [Research on direct forming of comminuted fracture surgery orienting model by selective laser melting].

    PubMed

    He, Xingrong; Yang, Yongqiang; Wu, Weihui; Wang, Di; Ding, Huanwen; Huang, Weihong

    2010-06-01

    In order to simplify distal femoral comminuted fracture surgery and improve the accuracy with which the parts are reset, a surgery orienting model for the operation was designed according to computed tomography scanning data and the three-dimensional reconstruction image. Using the DiMetal-280 selective laser melting rapid prototyping system, a surgery orienting model of 316L stainless steel was made, with the processing parameters optimized through an orthogonal experiment. Compared with the conventional approach, direct manufacture of the surgery orienting model by selective laser melting showed clear advantages: high speed, a precise profile, and good dimensional accuracy. The model was applied in a real surgical operation for thighbone replacement and worked well. The successful development of the model provides a new method for the automated manufacture of customized surgery models, building a foundation for more clinical applications in the future. PMID:20649010

  20. Bayesian model selection without evidences: application to the dark energy equation-of-state

    NASA Astrophysics Data System (ADS)

    Hee, S.; Handley, W. J.; Hobson, M. P.; Lasenby, A. N.

    2016-01-01

    A method is presented for Bayesian model selection without explicitly computing evidences, by using a combined likelihood and introducing an integer model selection parameter n so that Bayes factors, or more generally posterior odds ratios, may be read off directly from the posterior of n. If the total number of models under consideration is specified a priori, the full joint parameter space (θ, n) of the models is of fixed dimensionality and can be explored using standard Markov chain Monte Carlo (MCMC) or nested sampling methods, without the need for reversible jump MCMC techniques. The posterior on n is then obtained by straightforward marginalization. We demonstrate the efficacy of our approach by application to several toy models. We then apply it to constraining the dark energy equation of state using a free-form reconstruction technique. We show that Λ cold dark matter is significantly favoured over all extensions, including the simple w(z) = constant model.
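
    The core construction, an integer model-selection parameter n sampled alongside the model parameters so that posterior odds can be read off from the posterior of n, can be sketched on a toy problem. The two Gaussian models, priors, and proposal scales below are assumptions chosen for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      y = rng.normal(0.3, 1.0, size=50)          # toy data

      def log_post(mu, n):
          """Joint log-posterior on the product space (mu, n), n in {1, 2}.

          Model 1: y ~ N(0, 1); mu then only feels its prior, which keeps the
          parameter space at fixed dimensionality. Model 2: y ~ N(mu, 1).
          Equal prior mass on both models; mu ~ N(0, 1) a priori.
          """
          mean = 0.0 if n == 1 else mu
          return -0.5 * np.sum((y - mean) ** 2) - 0.5 * mu ** 2

      mu, n = 0.0, 1
      n_trace = []
      for _ in range(20000):
          mu_prop = mu + rng.normal(0.0, 0.3)     # Metropolis update of mu
          if np.log(rng.random()) < log_post(mu_prop, n) - log_post(mu, n):
              mu = mu_prop
          n_prop = 3 - n                          # propose the other model
          if np.log(rng.random()) < log_post(mu, n_prop) - log_post(mu, n):
              n = n_prop
          n_trace.append(n)

      p2 = np.mean(np.array(n_trace) == 2)        # posterior probability of model 2
      print(f"P(n=2 | y) = {p2:.3f}, posterior odds = {p2 / (1 - p2):.2f}")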

  2. How Reliable is Bayesian Model Averaging Under Noisy Data? Statistical Assessment and Implications for Robust Model Selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Wöhling, Thomas; Nowak, Wolfgang

    2014-05-01

    Bayesian model averaging ranks the predictive capabilities of alternative conceptual models based on Bayes' theorem. The individual models are weighted with their posterior probability of being the best one in the considered set of models. Finally, their predictions are combined into a robust weighted average and the predictive uncertainty can be quantified. This rigorous procedure does not, however, yet account for possible instabilities due to measurement noise in the calibration data set. This is a major drawback, since posterior model weights may lack robustness in the face of noisy data, which may compromise the reliability of model ranking. We present a new statistical concept that accounts for measurement noise as a source of uncertainty in the weights used in Bayesian model averaging. Our suggested upgrade reflects the limited information content of data for the purpose of model selection. It allows us to assess the significance of the determined posterior model weights, the confidence in model selection, and the accuracy of the quantified predictive uncertainty. Our approach rests on a brute-force Monte Carlo framework. We determine the robustness of model weights against measurement noise by repeatedly perturbing the observed data with random realizations of measurement error. Then, we analyze the induced variability in posterior model weights and introduce this "weighting variance" as an additional term in the overall prediction uncertainty analysis scheme. We further determine the theoretical upper limit on the performance of the model set imposed by measurement noise. As an extension of the merely relative model ranking, this analysis provides a measure of absolute model performance. To finally decide whether better data or longer time series are needed to ensure a robust basis for model selection, we resample the measurement time series and assess the convergence of model weights for increasing time series length. We illustrate
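
    A brute-force version of the idea fits in a few lines: perturb the observed data with fresh realizations of measurement error, recompute the posterior model weights each time, and report their spread. The two competing prediction curves and the noise level below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(7)

      # Two hypothetical model predictions of the same 20 observables.
      x = np.linspace(0.0, 1.0, 20)
      pred_m1, pred_m2 = x, x ** 1.2
      sigma = 0.05                                  # measurement-error std
      data = x ** 1.1 + rng.normal(0, sigma, 20)    # 'observed' data

      def bma_weights(d):
          # Gaussian log-likelihood of each model, equal prior weights
          ll = np.array([-0.5 * np.sum((d - p) ** 2) / sigma ** 2
                         for p in (pred_m1, pred_m2)])
          w = np.exp(ll - ll.max())
          return w / w.sum()

      # Monte Carlo over noise realizations: how much do the weights wobble?
      W = np.array([bma_weights(data + rng.normal(0, sigma, 20))
                    for _ in range(1000)])
      print("mean weights:", W.mean(axis=0))
      print("weight std  :", W.std(axis=0))   # the 'weighting variance' idea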

  3. Model selection and change detection for a time-varying mean in process monitoring

    NASA Astrophysics Data System (ADS)

    Burr, Tom; Hamada, Michael S.; Ticknor, Larry; Weaver, Brian

    2014-07-01

    Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation is an old topic; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of alarm threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper briefly reviews alarm threshold estimation, introduces model selection options, and considers several assumptions regarding the data-generating mechanism for PM residuals. Four PM examples from nuclear safeguards are included. One example involves frequent by-batch material balance closures where a dissolution vessel has time-varying efficiency, leading to time-varying material holdup. Another example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals. Our main focus is selecting a defensible model for normal behavior with a time-varying mean in a PM residual stream. We use approximate Bayesian computation to perform the model selection and parameter estimation for normal behavior. We then describe a simple lag-one-differencing option, similar to those used to monitor non-stationary time series, for monitoring off-normal behavior.
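
    A minimal sketch of the lag-one-differencing option reads as follows; the simulated residual stream, the drift rate, and the target false alarm rate are assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical PM residual stream (residual = data - prediction) with a
      # slowly varying mean, e.g. from time-varying holdup, plus noise.
      t = np.arange(4000)
      residuals = 0.002 * t + rng.normal(0.0, 1.0, t.size)

      # Lag-one differencing removes the slowly varying mean, leaving an
      # approximately stationary series to monitor.
      d = np.diff(residuals)

      # Estimate the alarm threshold for a small false alarm rate from an
      # in-control training stretch, then monitor the remainder.
      alpha = 0.001
      train, monitor = d[:2000], d[2000:]
      threshold = np.quantile(np.abs(train), 1 - alpha)
      alarm_rate = np.mean(np.abs(monitor) > threshold)
      print(f"threshold = {threshold:.2f}, observed alarm rate = {alarm_rate:.4f}")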

  4. The effects of modeling contingencies in the treatment of food selectivity in children with autism.

    PubMed

    Fu, Sherrene B; Penrod, Becky; Fernand, Jonathan K; Whelan, Colleen M; Griffith, Kristin; Medved, Shannon

    2015-11-01

    The current study investigated the effectiveness of stating and modeling contingencies in increasing food consumption for two children with food selectivity. Results suggested that stating and modeling a differential reinforcement (DR) contingency for food consumption was effective in increasing consumption of two target foods for one child, and stating and modeling a DR plus nonremoval of the spoon contingency was effective in increasing consumption of the remaining food for the first child and all target foods for the second child. PMID:26134303

  5. A signal integration model of thymic selection and natural regulatory T cell commitment.

    PubMed

    Khailaie, Sahamoddin; Robert, Philippe A; Toker, Aras; Huehn, Jochen; Meyer-Hermann, Michael

    2014-12-15

    The extent of TCR self-reactivity is the basis for selection of a functional and self-tolerant T cell repertoire and is quantified by repeated engagement of TCRs with a diverse pool of self-peptides complexed with self-MHC molecules. The strength of a TCR signal depends on the binding properties of a TCR to the peptide and the MHC, but it is not clear how the specificity to both components drives fate decisions. In this study, we propose a TCR signal-integration model of thymic selection that describes how thymocytes decide among distinct fates, not only based on a single TCR-ligand interaction, but taking into account the TCR stimulation history. These fates are separated based on sustained accumulated signals for positive selection and transient peak signals for negative selection. This maps the cells into a two-dimensional space in which they are either neglected, positively selected, negatively selected, or selected as natural regulatory T cells (nTregs). We show that the dynamics of the integrated signal can serve as a successful basis for extracting the specificity of thymocytes to MHC and for detecting the existence of cognate self-peptide-MHC. It allows selection of a self-MHC-biased and self-peptide-tolerant T cell repertoire. Furthermore, nTregs in the model are enriched with MHC-specific TCRs. This allows nTregs to be more sensitive to activation and more cross-reactive than conventional T cells. This study provides a mechanistic model showing that time integration of TCR-mediated signals, as opposed to single-cell interaction events, is needed to gain a full view of the properties emerging from thymic selection. PMID:25392533

  6. Prattville intake, Lake Almanor, California, hydraulic model study on selective withdrawal modifications. Final report

    SciTech Connect

    Vermeyen, T.

    1995-07-01

    Bureau of Reclamation conducted this hydraulic model study to provide Pacific Gas and Electric Company with an evaluation of several selective withdrawal structures that are being considered to reduce intake flow temperatures through the Prattville Intake at Lake Almanor, California. Release temperature control using selective withdrawal structures is being considered in an effort to improve the cold-water fishery in the North Fork of the Feather River.

  7. Using an immune system model to explore mate selection in genetic algorithms.

    SciTech Connect

    Huang, C. F.

    2003-01-01

    In the setting of multimodal function optimization, engineering, and machine learning, identifying multiple peaks and maintaining subpopulations of the search space are two central themes when Genetic Algorithms (GAs) are employed. In this paper, an immune system model is adopted to develop a framework for exploring the role of mate selection in GAs with respect to these two issues. The experimental results reported in the paper shed more light on how mate selection schemes compare to traditional selection schemes. In particular, we show that dissimilar mating is beneficial for identifying multiple peaks, yet harmful for maintaining subpopulations of the search space.
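
    The contrast the paper draws can be prototyped quickly: below, each parent picks the mate with the largest Hamming distance from a small random candidate pool, on a toy bimodal objective. The objective, pool size, and mutation rate are all assumptions.

      import numpy as np

      rng = np.random.default_rng(5)

      def fitness(pop):
          # Bimodal toy objective: rewarded for being mostly 0s or mostly 1s
          ones = pop.sum(axis=1)
          return np.maximum(ones, pop.shape[1] - ones).astype(float)

      def dissimilar_mate(pop, i, pool_size=5):
          """Mate = most dissimilar (largest Hamming distance) of a random pool."""
          pool = rng.choice(len(pop), size=pool_size, replace=False)
          return pool[np.argmax((pop[pool] != pop[i]).sum(axis=1))]

      pop = rng.integers(0, 2, size=(60, 40))
      for _ in range(100):
          f = fitness(pop)
          parents = rng.choice(len(pop), size=len(pop), p=f / f.sum())
          children = []
          for i in parents:
              j = dissimilar_mate(pop, i)
              cut = rng.integers(1, pop.shape[1])          # one-point crossover
              child = np.concatenate([pop[i, :cut], pop[j, cut:]])
              flip = rng.random(child.size) < 0.01         # bit-flip mutation
              children.append(np.where(flip, 1 - child, child))
          pop = np.array(children)

      # Are both peaks (all-0 and all-1 genomes) still represented?
      ones = pop.sum(axis=1)
      print("near all-0 peak:", int(np.sum(ones <= 5)),
            "near all-1 peak:", int(np.sum(ones >= 35)))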

  8. Understanding the link between sexual selection, sexual conflict and aging using crickets as a model.

    PubMed

    Archer, C Ruth; Hunt, John

    2015-11-01

    Aging evolved because the strength of natural selection declines over the lifetime of most organisms. Weak natural selection late in life allows the accumulation of deleterious mutations and may favor alleles that have positive effects on fitness early in life, but costly pleiotropic effects expressed later on. While this decline in natural selection is central to longstanding evolutionary explanations for aging, a role for sexual selection and sexual conflict in the evolution of lifespan and aging has only been identified recently. Testing how sexual selection and sexual conflict affect lifespan and aging is challenging as it requires quantifying male age-dependent reproductive success. This is difficult in the invertebrate model organisms traditionally used in aging research. Research using crickets (Orthoptera: Gryllidae), where reproductive investment can be easily measured in both sexes, has offered exciting and novel insights into how sexual selection and sexual conflict affect the evolution of aging, both in the laboratory and in the wild. Here we discuss how sexual selection and sexual conflict can be integrated alongside evolutionary and mechanistic theories of aging using crickets as a model. We then highlight the potential for research using crickets to further advance our understanding of lifespan and aging.

  9. Selecton 2007: advanced models for detecting positive and purifying selection using a Bayesian inference approach.

    PubMed

    Stern, Adi; Doron-Faigenboim, Adi; Erez, Elana; Martz, Eric; Bacharach, Eran; Pupko, Tal

    2007-07-01

    Biologically significant sites in a protein may be identified by contrasting the rates of synonymous (K(s)) and non-synonymous (K(a)) substitutions. This enables the inference of site-specific positive Darwinian selection and purifying selection. We present here Selecton version 2.2 (http://selecton.bioinfo.tau.ac.il), a web server which automatically calculates the ratio between K(a) and K(s) (omega) at each site of the protein. This ratio is graphically displayed on each site using a color-coding scheme, indicating either positive selection, purifying selection or lack of selection. Selecton implements an assembly of different evolutionary models, which allow for statistical testing of the hypothesis that a protein has undergone positive selection. Specifically, the recently developed mechanistic-empirical model is introduced, which takes into account the physicochemical properties of amino acids. Advanced options were introduced to allow maximal fine tuning of the server to the user's specific needs, including calculation of statistical support of the omega values, an advanced graphic display of the protein's 3-dimensional structure, use of different genetic codes and inputting of a pre-built phylogenetic tree. Selecton version 2.2 is an effective, user-friendly and freely available web server which implements up-to-date methods for computing site-specific selection forces, and the visualization of these forces on the protein's sequence and structure.

  10. MOMENT-BASED METHOD FOR RANDOM EFFECTS SELECTION IN LINEAR MIXED MODELS

    PubMed Central

    Ahn, Mihye; Lu, Wenbin

    2012-01-01

    The selection of random effects in linear mixed models is an important yet challenging problem in practice. We propose a robust and unified framework for automatically selecting random effects and estimating covariance components in linear mixed models. A moment-based loss function is first constructed for estimating the covariance matrix of random effects. Two types of shrinkage penalties, a hard thresholding operator and a new sandwich-type soft-thresholding penalty, are then imposed for sparse estimation and random effects selection. Compared with existing approaches, the new procedure does not require any distributional assumption on the random effects and error terms. We establish the asymptotic properties of the resulting estimator in terms of its consistency in both random effects selection and variance component estimation. Optimization strategies are suggested to tackle the computational challenges involved in estimating the sparse variance-covariance matrix. Furthermore, we extend the procedure to incorporate the selection of fixed effects as well. Numerical results show promising performance of the new approach in selecting both random and fixed effects and, consequently, improving the efficiency of estimating model parameters. Finally, we apply the approach to a data set from the Amsterdam Growth and Health study. PMID:23105913

  11. Probing cosmology with weak lensing selected clusters. II. Dark energy and f(R) gravity models

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Hamana, Takashi; Yoshida, Naoki

    2016-02-01

    Ongoing and future wide-field galaxy surveys can be used to locate a number of clusters of galaxies with cosmic shear measurement alone. We study constraints on cosmological models using statistics of weak lensing selected galaxy clusters. We extend our previous theoretical framework to model the statistical properties of clusters in variants of cosmological models as well as in the standard ΛCDM model. Weak lensing selection of clusters does not rely on conventional assumptions such as the relation between luminosity and mass and/or hydrostatic equilibrium, but a number of observational effects compromise robust identification. We use a large set of realistic mock weak lensing catalogs as well as analytic models to perform a Fisher analysis and make a forecast for constraining two competing cosmological models, the wCDM model and the f(R) model proposed by Hu and Sawicki (2007, Phys. Rev. D, 76, 064004), with our lensing statistics. We show that weak lensing selected clusters are excellent probes of cosmology when combined with the cosmic shear power spectrum even in the presence of galaxy shape noise and masked regions. With the information from weak lensing selected clusters, the precision of cosmological parameter estimates can be improved by a factor of ~1.6 and ~8 for the wCDM model and f(R) model, respectively. The Hyper Suprime-Cam survey with sky coverage of 1250 square degrees can constrain the equation of state of dark energy w_0 at the level of Δw_0 ~ 0.1. It can also constrain the additional scalar degree of freedom in the f(R) model at the level of |f_R0| ~ 5 × 10^-6, when constraints from cosmic microwave background measurements are incorporated. Future weak lensing surveys with sky coverage of 20000 square degrees will place tighter constraints on w_0 and |f_R0| even without cosmic microwave background measurements.

  12. Cross Validation for Selection of Cortical Interaction Models From Scalp EEG or MEG

    PubMed Central

    Cheung, Bing Leung Patrick; Nowak, Robert; Lee, Hyong Chol; van Drongelen, Wim; Van Veen, Barry D.

    2012-01-01

    A cross-validation (CV) method based on a state-space framework is introduced for comparing the fidelity of different cortical interaction models to the measured scalp electroencephalogram (EEG) or magnetoencephalography (MEG) data being modeled. A state equation models the cortical interaction dynamics, and an observation equation represents the scalp measurement of cortical activity and noise. The measured data are partitioned into training and test sets. The training set is used to estimate model parameters, and the model quality is evaluated by computing test data innovations for the estimated model. Two CV metrics, normalized mean square error and log-likelihood, are estimated by averaging over different training/test partitions of the data. The effectiveness of this method of model selection is illustrated by comparing two linear modeling methods and two nonlinear modeling methods on simulated EEG data derived using both known dynamic systems and measured electrocorticography data from an epilepsy patient. PMID:22084038
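
    The train/test-innovation logic is generic; the sketch below applies it to toy autoregressive models fitted by least squares rather than to state-space cortical models, so the model orders and noise levels are assumptions.

      import numpy as np

      rng = np.random.default_rng(11)

      # Simulated 'measurement': an AR(2) process plus observation noise.
      n = 600
      x = np.zeros(n)
      for t in range(2, n):
          x[t] = 1.2 * x[t - 1] - 0.5 * x[t - 2] + rng.normal(0.0, 1.0)
      y = x + rng.normal(0.0, 0.5, n)

      def fit_ar(series, p):
          # Least-squares AR(p): predict series[t] from series[t-1..t-p]
          X = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
          coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
          return coef

      def innovations(series, coef):
          p = len(coef)
          X = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
          return series[p:] - X @ coef      # one-step-ahead prediction errors

      train, test = y[:400], y[400:]        # training/test partition
      for p in (1, 2, 3):
          e = innovations(test, fit_ar(train, p))
          nmse = np.mean(e ** 2) / np.var(test)
          loglik = -0.5 * np.sum(e ** 2 / e.var() + np.log(2 * np.pi * e.var()))
          print(f"AR({p}): test NMSE = {nmse:.3f}, test log-likelihood = {loglik:.1f}")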

  13. Hierarchical Classes Models for Three-Way Three-Mode Binary Data: Interrelations and Model Selection

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven

    2005-01-01

    Several hierarchical classes models can be considered for the modeling of three-way three-mode binary data, including the INDCLAS model (Leenen, Van Mechelen, De Boeck, and Rosenberg, 1999), the Tucker3-HICLAS model (Ceulemans, Van Mechelen, and Leenen, 2003), the Tucker2-HICLAS model (Ceulemans and Van Mechelen, 2004), and the Tucker1-HICLAS model…

  14. Use of Thermodynamic Modeling for Selection of Electrolyte for Electrorefining of Magnesium from Aluminum Alloy Melts

    NASA Astrophysics Data System (ADS)

    Gesing, Adam J.; Das, Subodh K.

    2016-06-01

    With funding from the United States Department of Energy's Advanced Research Projects Agency, experimental proof-of-concept was demonstrated for the RE-12TM electrorefining process, which extracts a desired amount of Mg from molten recycled secondary Al scrap alloys. The key enabling technology for this process was the selection of a suitable electrolyte composition and operating temperature. The selection was made using the FactSage thermodynamic modeling software and its light metal, molten salt, and oxide thermodynamic databases. Modeling allowed prediction of the chemical equilibria and of the impurity contents in the anode and cathode products and in the electrolyte. FactSage also provided data on the physical properties of the electrolyte and the molten metal phases, including electrical conductivity and density of the molten phases. Further modeling permitted selection of electrode and cell construction materials chemically compatible with the combination of molten metals and the electrolyte.

  15. The effect of smoking on health using a sequential self-selection model.

    PubMed

    Lahiri, K; Song, J G

    2000-09-01

    We estimate a structural model of individual smoking behaviour emphasizing the role of individual risk beliefs in smoking choices. Our model consists of five equations: two selection equations for initiation and cessation decisions, and three switching outcome regressions for nonsmokers, ex-smokers, and current smokers. The presence of significant self-selectivity implies that health effects of smoking based on sample proportions do not correctly indicate the true risk of cigarette smoking. Further, our evidence suggests that the self-selection in the cessation decision, but not in the initiation decision, is consistent with economic rationality. We estimate the model by full information maximum likelihood (FIML), with starting values from a heteroskedasticity-corrected Heckman-Lee two-step method, using newly released Health and Retirement Study (HRS) data.

  16. Androgen receptor polyglutamine repeat number: models of selection and disease susceptibility

    PubMed Central

    Ryan, Calen P; Crespi, Bernard J

    2013-01-01

    Variation in polyglutamine repeat number in the androgen receptor (AR CAGn) is negatively correlated with the transcription of androgen-responsive genes and is associated with susceptibility to an extensive list of human diseases. Only a small portion of the heritability of many of these diseases is explained by conventional SNP-based genome-wide association studies, and the forces shaping AR CAGn among humans remain largely unexplored. Here, we propose evolutionary models for understanding selection at the AR CAG locus, namely balancing selection, sexual conflict, accumulation-selection, and antagonistic pleiotropy. We evaluate these models by examining AR CAGn-linked susceptibility to eight extensively studied diseases representing the diverse physiological roles of androgens, and we consider the costs of these diseases in terms of their frequency and fitness effects. Five diseases could contribute to the distribution of AR CAGn observed among contemporary human populations. With support for disease susceptibilities associated with both long and short AR CAGn, balancing selection provides a useful model for studying selection at this locus. Gender-specific differences in the health effects of AR CAGn also support this locus as a candidate for sexual conflict over repeat number. Together with the accumulation of AR CAGn in humans, these models help explain the distribution of repeat number in contemporary human populations. PMID:23467468

  17. Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems

    PubMed Central

    Toni, Tina; Welch, David; Strelkowa, Natalja; Ipsen, Andreas; Stumpf, Michael P.H.

    2008-01-01

    Approximate Bayesian computation (ABC) methods can be used to evaluate posterior distributions without having to calculate likelihoods. In this paper, we discuss and apply an ABC method based on sequential Monte Carlo (SMC) to estimate parameters of dynamical models. We show that ABC SMC provides information about the inferability of parameters and model sensitivity to changes in parameters, and tends to perform better than other ABC approaches. The algorithm is applied to several well-known biological systems, for which parameters and their credible intervals are inferred. Moreover, we develop ABC SMC as a tool for model selection; given a range of different mathematical descriptions, ABC SMC is able to choose the best model using the standard Bayesian model selection apparatus. PMID:19205079
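
    The simplest member of this family, plain rejection ABC, already shows the mechanics (the SMC variant adds a sequence of shrinking tolerances with importance weights). The Poisson toy model, prior, summary statistics, and tolerance below are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      theta_true = 1.5
      obs = rng.poisson(theta_true, size=30)          # 'observed' data

      def summary(d):
          return np.array([d.mean(), d.var()])

      s_obs = summary(obs)

      # Rejection ABC: draw theta from the prior, simulate a dataset, keep
      # theta whenever the simulated summaries land within eps of s_obs.
      eps, accepted = 0.5, []
      while len(accepted) < 1000:
          theta = rng.uniform(0.0, 5.0)               # prior draw
          sim = rng.poisson(theta, size=30)
          if np.linalg.norm(summary(sim) - s_obs) < eps:
              accepted.append(theta)

      post = np.array(accepted)
      print(f"posterior mean {post.mean():.2f}, 95% interval "
            f"({np.quantile(post, 0.025):.2f}, {np.quantile(post, 0.975):.2f})")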

  18. (De)constructing the ryanodine receptor: modeling ion permeation and selectivity of the calcium release channel.

    PubMed

    Gillespie, Dirk; Xu, Le; Wang, Ying; Meissner, Gerhard

    2005-08-18

    Biological ion channels are proteins that passively conduct ions across membranes that are otherwise impermeable to ions. Here, we present a model of ion permeation and selectivity through a single, open ryanodine receptor (RyR) ion channel. Combining recent mutation data with electrodiffusion of finite-sized ions, the model reproduces the current/voltage curves of cardiac RyR (RyR2) in KCl, LiCl, NaCl, RbCl, CsCl, CaCl(2), MgCl(2), and their mixtures over large concentrations and applied voltage ranges. It also reproduces the reduced K(+) conductances and Ca(2+) selectivity of two skeletal muscle RyR (RyR1) mutants (D4899N and E4900Q). The model suggests that the selectivity filter of RyR contains the negatively charged residue D4899 that dominates the permeation and selectivity properties and gives RyR a DDDD locus similar to the EEEE locus of the L-type calcium channel. In contrast to previously applied barrier models, the current model describes RyR as a multi-ion channel with approximately three monovalent cations in the selectivity filter at all times. Reasons for the contradicting occupancy predictions are discussed. In addition, the model predicted an anomalous mole fraction effect for Na(+)/Cs(+) mixtures, which was later verified by experiment. Combining these results, the binding selectivity of RyR appears to be driven by the same charge/space competition mechanism of other highly charged channels.

  19. Selecting a linear mixed model for longitudinal data: repeated measures analysis of variance, covariance pattern model, and growth curve approaches.

    PubMed

    Liu, Siwei; Rovine, Michael J; Molenaar, Peter C M

    2012-03-01

    With increasing popularity, growth curve modeling is more and more often considered the first choice for analyzing longitudinal data. Although the growth curve approach is often a good choice, other modeling strategies may more directly answer questions of interest. It is common to see researchers fit growth curve models without considering alternative modeling strategies. In this article we compare 3 approaches for analyzing longitudinal data: repeated measures analysis of variance, covariance pattern models, and growth curve models. As all are members of the general linear mixed model family, they represent somewhat different assumptions about the way individuals change. These assumptions result in different patterns of covariation among the residuals around the fixed effects. In this article, we first indicate the kinds of data that are appropriately modeled by each and use real data examples to demonstrate possible problems associated with the blanket selection of the growth curve model. We then present a simulation that indicates the utility of the Akaike information criterion and the Bayesian information criterion in the selection of a proper residual covariance structure. The results cast doubt on the popular practice of automatically using growth curve modeling for longitudinal data without comparing the fit of different models. Finally, we provide some practical advice for assessing mean changes in the presence of correlated data.
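
    A compact way to see the criterion-based comparison that the simulation argues for: fit two residual covariance structures to the same toy longitudinal data by maximum likelihood and compare AIC and BIC. The data-generating model and the use of the subject count as the BIC sample size are assumptions.

      import numpy as np
      from scipy import optimize, stats

      rng = np.random.default_rng(8)

      # Toy data: 100 subjects x 4 occasions, a common linear time trend,
      # and compound-symmetric errors (random intercepts + noise).
      n, T = 100, 4
      time = np.arange(T)
      Y = 2.0 + 0.5 * time + rng.normal(0, 1, (n, 1)) + rng.normal(0, 1, (n, T))

      def neg_loglik(params, structure):
          b0, b1 = params[:2]
          resid = Y - (b0 + b1 * time)
          if structure == "independence":
              cov = np.exp(params[2]) * np.eye(T)
          else:   # compound symmetry: common variance plus shared covariance
              cov = np.exp(params[2]) * np.eye(T) + np.exp(params[3]) * np.ones((T, T))
          return -stats.multivariate_normal(np.zeros(T), cov).logpdf(resid).sum()

      for structure, k in (("independence", 3), ("compound symmetry", 4)):
          res = optimize.minimize(neg_loglik, np.zeros(k), args=(structure,))
          aic = 2.0 * res.fun + 2 * k
          bic = 2.0 * res.fun + k * np.log(n)      # n subjects as sample size
          print(f"{structure:17s}  AIC {aic:9.1f}  BIC {bic:9.1f}")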

  20. On selecting reference image models for anomaly detection in industrial systems

    NASA Astrophysics Data System (ADS)

    Xiao, Xinhua; Quan, Jin; Ferro, Andrew; Han, Chia Y.; Zhou, Xuefu; Wee, William G.

    2013-09-01

    Automatic X-ray inspection of industrial parts usually uses reference-based methods, in which a set of model images, or statistics extracted from the model image set, serves as the benchmark. Many systems based on these methods have been developed and are used extensively for anomaly detection. However, the performance of these systems relies heavily on the model image set, so the selection of the model images is very important. This paper presents an approach for automatically selecting a set of model images to be used in a reference-based assisted defect recognition (ADR) system for anomaly detection on turbine blades of jet engines. The proposed approach to generating a model image set is based on feature extraction: features are extracted from ADR callout images, including potential defect indication type, size, and location. Experimental results show that the proposed approach is fast and ensures a low false alarm rate with an acceptable detection rate. Moreover, the approach is applicable to different blade types and varied views of the blade. Further validation shows that the approach can be applied to updating the model image set when more images are generated from new blades and the model becomes inaccurate for anomaly detection in the new images.

  1. Journal selection decisions: a biomedical library operations research model. I. The framework.

    PubMed Central

    Kraft, D H; Polacsek, R A; Soergel, L; Burns, K; Klair, A

    1976-01-01

    The problem of deciding which journal titles to select for acquisition in a biomedical library is modeled. The approach taken is based on cost/benefit ratios. Measures of journal worth, methods of data collection, and journal cost data are considered. The emphasis is on the development of a practical process for selecting journal titles, based on the objectivity and rationality of the model, and on the collection of the appropriate data and library statistics in a reasonable manner. The implications of this process for an overall management information system (MIS) for biomedical serials handling are discussed. PMID:820391
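
    One way to operationalize a cost/benefit selection process is a greedy ranking by worth-to-cost ratio under a budget cap, sketched below; the journal names, use counts, costs, and budget are all invented.

      # Hypothetical journal statistics: (expected uses per year, annual cost).
      journals = {
          "J Clin Invest": (820, 1100.0),
          "Brain Res":     (310,  950.0),
          "Am J Physiol":  (560,  700.0),
          "Lancet":        (940,  450.0),
          "Biochem J":     (150,  600.0),
      }
      budget = 2000.0

      # Rank by benefit-to-cost ratio and select until the budget runs out.
      ranked = sorted(journals.items(),
                      key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
      chosen, spent = [], 0.0
      for name, (uses, cost) in ranked:
          if spent + cost <= budget:
              chosen.append(name)
              spent += cost

      print(f"selected: {chosen} (spent ${spent:.0f} of ${budget:.0f})")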

  2. Using the Animal Model to Accelerate Response to Selection in a Self-Pollinating Crop

    PubMed Central

    Cowling, Wallace A.; Stefanova, Katia T.; Beeck, Cameron P.; Nelson, Matthew N.; Hargreaves, Bonnie L. W.; Sass, Olaf; Gilmour, Arthur R.; Siddique, Kadambot H. M.

    2015-01-01

    We used the animal model in S0 (F1) recurrent selection in a self-pollinating crop including, for the first time, phenotypic and relationship records from self progeny, in addition to cross progeny, in the pedigree. We tested the model in Pisum sativum, the autogamous annual species used by Mendel to demonstrate the particulate nature of inheritance. Resistance to ascochyta blight (Didymella pinodes complex) in segregating S0 cross progeny was assessed by best linear unbiased prediction over two cycles of selection. Genotypic concurrence across cycles was provided by pure-line ancestors. From cycle 1, 102/959 S0 plants were selected, and their S1 self progeny were intercrossed and selfed to produce 430 S0 and 575 S2 individuals that were evaluated in cycle 2. The analysis was improved by including all genetic relationships (with crossing and selfing in the pedigree), additive and nonadditive genetic covariances between cycles, fixed effects (cycles and spatial linear trends), and other random effects. Narrow-sense heritability for ascochyta blight resistance was 0.305 and 0.352 in cycles 1 and 2, respectively, calculated from variance components in the full model. The fitted correlation of predicted breeding values across cycles was 0.82. Average accuracy of predicted breeding values was 0.851 for S2 progeny of S1 parent plants and 0.805 for S0 progeny tested in cycle 2, and 0.878 for S1 parent plants for which no records were available. The forecasted response to selection was 11.2% in the next cycle with 20% S0 selection proportion. This is the first application of the animal model to cyclic selection in heterozygous populations of selfing plants. The method can be used in genomic selection, and for traits measured on S0-derived bulks such as grain yield. PMID:25943522

  3. Crossing statistic: Bayesian interpretation, model selection and resolving dark energy parametrization problem

    SciTech Connect

    Shafieloo, Arman

    2012-05-01

    By introducing Crossing functions and hyper-parameters, I show that the Bayesian interpretation of the Crossing Statistics [1] can be used straightforwardly for the purpose of model selection among cosmological models. In this approach, falsifying a cosmological model requires neither comparing it with other models nor assuming any particular form of parametrization for cosmological quantities such as the luminosity distance, the Hubble parameter, or the equation of state of dark energy. Instead, the hyper-parameters of the Crossing functions act as discriminators between correct and wrong models. Using this approach one can falsify any assumed cosmological model without putting priors on the underlying actual model of the universe and its parameters; hence the issue of dark energy parametrization is resolved. It is also shown that the method's sensitivity to the intrinsic dispersion of the data is small, another important characteristic when testing cosmological models against data with high uncertainties.

  4. Adaptive fixation in two-locus models of stabilizing selection and genetic drift.

    PubMed

    Wollstein, Andreas; Stephan, Wolfgang

    2014-10-01

    The relationship between quantitative genetics and population genetics has been studied for nearly a century, almost since the existence of these two disciplines. Here we ask to what extent quantitative genetic models in which selection is assumed to operate on a polygenic trait predict adaptive fixations that may lead to footprints in the genome (selective sweeps). We study two-locus models of stabilizing selection (with and without genetic drift) by simulations and analytically. For symmetric viability selection we find that ∼16% of the trajectories may lead to fixation if the initial allele frequencies are sampled from the neutral site-frequency spectrum and the effect sizes are uniformly distributed. However, if the population is preadapted when it undergoes an environmental change (i.e., sits in one of the equilibria of the model), the fixation probability decreases dramatically. In other two-locus models with general viabilities or an optimum shift, the proportion of adaptive fixations may increase to >24%. Similarly, genetic drift leads to a higher probability of fixation. The predictions of alternative quantitative genetics models, initial conditions, and effect-size distributions are also discussed.

  5. Evaluation of intradural stimulation efficiency and selectivity in a computational model of spinal cord stimulation.

    PubMed

    Howell, Bryan; Lad, Shivanand P; Grill, Warren M

    2014-01-01

    Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS, which, in turn

  6. Selection of Higher Order Regression Models in the Analysis of Multi-Factorial Transcription Data

    PubMed Central

    Prazeres da Costa, Olivia; Hoffman, Arthur; Rey, Johannes W.; Mansmann, Ulrich

    2014-01-01

    Introduction: Many studies examine gene expression data that has been obtained under the influence of multiple factors, such as genetic background, environmental conditions, or exposure to diseases. The interplay of multiple factors may lead to effect modification and confounding. Higher order linear regression models can account for these effects. We present a new methodology for linear model selection and apply it to microarray data of bone marrow-derived macrophages. This experiment investigates the influence of three variable factors: the genetic background of the mice from which the macrophages were obtained, Yersinia enterocolitica infection (two strains, and a mock control), and treatment/non-treatment with interferon-γ. Results: We set up four different linear regression models in a hierarchical order. We introduce the eruption plot as a new practical tool for model selection complementary to global testing. It visually compares the size and significance of effect estimates between two nested models. Using this methodology we were able to select the most appropriate model by keeping only relevant factors showing additional explanatory power. Application to experimental data allowed us to qualify the interaction of factors as either neutral (no interaction), alleviating (co-occurring effects are weaker than expected from the single effects), or aggravating (stronger than expected). We find a biologically meaningful gene cluster of putative C2TA target genes that appear to be co-regulated with MHC class II genes. Conclusions: We introduced the eruption plot as a tool for visual model comparison to identify relevant higher order interactions in the analysis of expression data obtained under the influence of multiple factors. We conclude that model selection in higher order linear regression models should generally be performed for the analysis of multi-factorial microarray data. PMID:24658540

  7. Genetic variation and selection response in model breeding populations of Brassica rapa following a diversity bottleneck.

    PubMed

    Briggs, William H; Goldman, Irwin L

    2006-01-01

    Domestication and breeding share a common feature of population bottlenecks followed by significant genetic gain. To date, no crop models for investigating the evolution of genetic variance, selection response, and population diversity following bottlenecks have been developed. We developed a model artificial selection system in the laboratory using rapid-cycling Brassica rapa. Responses to 10 cycles of recurrent selection for cotyledon size were compared across a broad population founded with 200 individuals, three bottleneck populations initiated with two individuals each, and unselected controls. Additive genetic variance and heritability were significantly larger in the bottleneck populations prior to selection and this corresponded to a heightened response of bottleneck populations during the first three cycles. However, the overall response was ultimately greater and more sustained in the broad population. AFLP marker analyses revealed the pattern and extent of population subdivision were unaffected by a bottleneck even though the diversity retained in a selection population was significantly limited. Rapid gain in genetically more uniform bottlenecked populations, particularly in the short term, may offer an explanation for why domesticators and breeders have realized significant selection progress over relatively short time periods.

  8. Genomic Response to Selection for Predatory Behavior in a Mammalian Model of Adaptive Radiation.

    PubMed

    Konczal, Mateusz; Koteja, Paweł; Orlowska-Feuer, Patrycja; Radwan, Jacek; Sadowska, Edyta T; Babik, Wiesław

    2016-09-01

    If genetic architectures of various quantitative traits are similar, as studies on model organisms suggest, comparable selection pressures should produce similar molecular patterns for various traits. To test this prediction, we used a laboratory model of vertebrate adaptive radiation to investigate the genetic basis of the response to selection for predatory behavior and compare it with evolution of aerobic capacity reported in an earlier work. After 13 generations of selection, the proportion of bank voles (Myodes [=Clethrionomys] glareolus) showing predatory behavior was five times higher in selected lines than in controls. We analyzed the hippocampus and liver transcriptomes and found repeatable changes in allele frequencies and gene expression. Genes with the largest differences between predatory and control lines are associated with hunger, aggression, biological rhythms, and functioning of the nervous system. Evolution of predatory behavior could be meaningfully compared with evolution of high aerobic capacity, because the experiments and analyses were performed in the same methodological framework. The number of genes that changed expression was much smaller in predatory lines, and allele frequencies changed repeatably in predatory but not in aerobic lines. This suggests that more variants of smaller effects underlie variation in aerobic performance, whereas fewer variants of larger effects underlie variation in predatory behavior. Our results thus contradict the view that comparable selection pressures for different quantitative traits produce similar molecular patterns. Therefore, to gain knowledge about molecular-level response to selection for complex traits, we need to investigate not only multiple replicate populations but also multiple quantitative traits. PMID:27401229

  11. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
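
    The iterative site-selection idea (add, at each step, the candidate whose environmental profile is least like anything already chosen) can be sketched without the full MaxEnt machinery as a greedy max-min distance rule over standardized environmental variables. This is a simplified stand-in for the procedure described above; all data below are synthetic.

```python
import numpy as np

def select_dissimilar_sites(env, k, seed=0):
    """Greedy max-min selection: repeatedly add the candidate site whose
    environmental profile is farthest from every site chosen so far.
    env: (n_sites, n_vars) array of environmental variables."""
    rng = np.random.default_rng(seed)
    z = (env - env.mean(axis=0)) / env.std(axis=0)   # standardize the variables
    chosen = [int(rng.integers(len(z)))]             # arbitrary starting site
    while len(chosen) < k:
        d = np.linalg.norm(z[:, None, :] - z[chosen][None, :, :], axis=2)
        nearest = d.min(axis=1)                      # distance to nearest chosen site
        nearest[chosen] = -np.inf                    # never re-select a site
        chosen.append(int(nearest.argmax()))
    return chosen

# toy example: 500 candidate cells described by 4 environmental variables
env = np.random.default_rng(1).normal(size=(500, 4))
print(select_dissimilar_sites(env, k=8))
```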

  12. The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site

    SciTech Connect

    Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.; Li, Jun; Wei, Lingli; Yamamoto, Hajime; Gasda, Sarah E.; Hosseini, Seyyed; Nicot, Jean-Philippe; Birkholzer, Jens

    2015-05-23

    Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.

  13. Dynamics of Plant Mitochondrial Genome: Model of a Three-Level Selection Process

    PubMed Central

    Albert, B.; Godelle, B.; Atlan, A.; De-Paepe, R.; Gouyon, P. H.

    1996-01-01

    The plant mitochondrial genome is composed of a set of molecules of various sizes that generate each other through recombination between repeated sequences. Molecular observations indicate that these different molecules are present in an equilibrium state. Different compositions of molecules have been observed within species. Recombination could produce deleted molecules with a high replication rate but bearing little useful information for the cell (such as 'petite' mutants in yeast). In this paper, we use a multilevel model to examine selection among rapidly replicating incomplete molecules and relatively slowly replicating complete molecules. Our model simulates the evolution of mitochondrial information through a three-level selection process including intermolecular, intermitochondrial, and intercellular selection. The model demonstrates that maintenance of the mitochondrial genome can result from multilevel selection, but maintenance is difficult to explain without the existence of selection at the intermitochondrial level. This study shows that compartmentation into mitochondria is useful for maintenance of the mitochondrial information. Our examination of evolutionary equilibria shows that different equilibria (with different combinations of molecules) can be obtained when recombination rates are lower than a threshold value. This may be interpreted as a drift-mutation balance. PMID:8878700

  14. The use of vector bootstrapping to improve variable selection precision in Lasso models.

    PubMed

    Laurin, Charles; Boomsma, Dorret; Lubke, Gitta

    2016-08-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections. Nesting cross-validation within bootstrapping could provide further improvements in precision, but this has not been investigated systematically. We performed simulation studies of Lasso variable selection precision (VSP) with and without nesting cross-validation within bootstrapping. Data were simulated to represent genomic data under a polygenic model as well as under a model with effect sizes representative of typical GWAS results. We compared these approaches to each other as well as to software defaults for the Lasso. Nested cross-validation had the most precise variable selection at small effect sizes. At larger effect sizes, there was no advantage to nesting. We illustrated the nested approach with empirical data comprising SNPs and SNP-SNP interactions from the most significant SNPs in a GWAS of borderline personality symptoms. In the empirical example, we found that the default Lasso selected low-reliability SNPs and interactions which were excluded by bootstrapping. PMID:27248122
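
    A minimal sketch of nesting cross-validation within bootstrapping, assuming scikit-learn: LassoCV (so cross-validation chooses the penalty) is refitted on each bootstrap resample, and the selection frequency of each variable is tracked. The data, the number of resamples, and the 80% stability threshold are illustrative choices, not the authors' settings.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.utils import resample

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 0.5                 # 5 true signals
y = X @ beta + rng.normal(size=n)

B = 100
counts = np.zeros(p)
for b in range(B):
    Xb, yb = resample(X, y, random_state=b)        # bootstrap resample
    fit = LassoCV(cv=5).fit(Xb, yb)                # cross-validation nested inside
    counts += fit.coef_ != 0                       # record which variables were kept

stable = np.where(counts / B >= 0.8)[0]            # kept in at least 80% of resamples
print(stable)                                      # should recover the true signals
```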

  15. A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data

    PubMed Central

    DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus

    2014-01-01

    While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706

  16. Selection of sugar cane full-sib families using mixed models and ISSR markers.

    PubMed

    Almeida, L M; Viana, A P; Gonçalves, G M; Entringer, G C

    2014-01-01

    In 2006, an experiment examining families belonging to the first selection stage of the Sugar Cane Breeding Program of Universidade Federal Rural do Rio de Janeiro/Rede Interuniversitária para o Desenvolvimento do Setor Sucroalcooleiro was conducted. Families and plants within families were evaluated to select superior plants for subsequent stages of the breeding program. The experiment was arranged in a randomized block design, in which progenies were grouped into 4 sets, each with 4 replicates and 100 seedlings per plot. The following traits were evaluated: average stem diameter, total plot weight, number of stems, Brix of the lower stem, and Brix of the upper stem. Families were analyzed with restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) mixed models. After selection, families were genotyped with inter-simple sequence repeat (ISSR) markers to assess the genetic distance of genotypes. This approach was found to be efficient for selecting new genotypes. PMID:25501142
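
    As an illustration of the REML/BLUP step, a random-intercept mixed model can be fitted with statsmodels and the predicted family effects ranked. This single-trait sketch on synthetic data omits the multi-set trial structure; the column names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the trial: 20 families, 4 blocks, one plot trait.
rng = np.random.default_rng(0)
fam_eff = rng.normal(0, 2, size=20)                # true family effects
rows = [{"family": f, "block": b,
         "weight": 50 + fam_eff[f] + rng.normal(0, 3)}
        for f in range(20) for b in range(4)]
df = pd.DataFrame(rows)

# Random-intercept model fitted by REML; predicted family effects act as BLUPs.
fit = smf.mixedlm("weight ~ C(block)", df, groups=df["family"]).fit(reml=True)
blups = pd.Series({k: v.iloc[0] for k, v in fit.random_effects.items()})
print(blups.sort_values(ascending=False).head())   # top-ranked families
```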

  17. Three-dimensional multiscale modeling of dendritic spacing selection during Al-Si directional solidification

    DOE PAGES

    Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain

    2015-05-27

    We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.

  18. Bayesian parameter inference and model selection by population annealing in systems biology.

    PubMed

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology, and Bayesian statistics can be used to conduct both. In particular, the framework of approximate Bayesian computation is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions; in such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the unidentifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating such an ensemble, and that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in this framework and to conduct model selection based on the Bayes factor.
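
    The toy sketch below conveys the flavor of annealing within approximate Bayesian computation: a particle population is filtered through a decreasing tolerance schedule, with resampling and jittering between levels. It is not the paper's algorithm; the Gaussian-mean example, the schedule, and the jitter scale are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)               # observed data, true mean = 2
obs = data.mean()                                  # summary statistic

def simulate(theta):
    return rng.normal(theta, 1.0, size=50).mean()

N = 2000
particles = rng.uniform(-5, 5, size=N)             # sample from a flat prior
for eps in [1.0, 0.5, 0.25, 0.1]:                  # annealed tolerance schedule
    # keep particles whose simulated summary falls within eps of the data
    kept = np.array([th for th in particles
                     if abs(simulate(th) - obs) < eps])
    # resample to restore the population size, then jitter (move step)
    particles = rng.choice(kept, size=N) + rng.normal(0, 0.1, size=N)

print(particles.mean(), particles.std())           # the posterior parameter ensemble
```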

  1. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  2. Application Of Decision Tree Approach To Student Selection Model - A Case Study

    NASA Astrophysics Data System (ADS)

    Harwati; Sudiya, Amby

    2016-01-01

    The main purpose of the institution is to provide quality education to its students and to improve the quality of managerial decisions. One way to improve the quality of students is to make the selection of new students more selective. This research takes as a case study the selection of new students at the Islamic University of Indonesia, Yogyakarta, Indonesia. One of the university's admission routes is administrative filtering based on prospective students' high school records, without a written test. Currently, that kind of selection has no standard model or criteria; selection is done only by comparing candidates' application files, so subjective assessment is likely because there are no standard criteria to differentiate the quality of one student from another. By applying data mining classification techniques, a selection model for new students can be built that includes criteria with defined standards, such as region of origin, school status, average grade, and so on. These criteria are determined using rules derived from classifying the academic achievement (GPA) of students in previous years who entered the university through the same route. The decision tree method with the C4.5 algorithm is used here. The results show that priority for admission should be given to students who meet the following criteria: they come from the island of Java, attended a public school, majored in science, have an average grade above 75, and earned at least one achievement during high school.
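
    A hedged sketch of the classification step using scikit-learn, which implements CART rather than C4.5; the entropy criterion at least reproduces C4.5-style information-gain splits. The applicant records and GPA-based labels below are fabricated placeholders for the kind of historical data the study describes.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant records; GPA_class is the label learned from past cohorts.
df = pd.DataFrame({
    "from_java":     [1, 1, 0, 0, 1, 0, 1, 0],
    "public_school": [1, 0, 1, 0, 1, 1, 0, 0],
    "science_major": [1, 1, 0, 1, 0, 0, 1, 0],
    "avg_score":     [82, 77, 70, 74, 79, 65, 88, 60],
    "GPA_class":     ["high", "high", "low", "low", "high", "low", "high", "low"],
})
X, y = df.drop(columns="GPA_class"), df["GPA_class"]

# Entropy criterion mimics C4.5's information-gain splits (sklearn uses CART).
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules
```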

  3. Using distance covariance for improved variable selection with application to learning genetic risk models.

    PubMed

    Kong, Jing; Wang, Sijian; Wahba, Grace

    2015-05-10

    Variable selection is of increasing importance in addressing the difficulties of high dimensionality in many scientific areas. In this paper, we demonstrate a property of distance covariance, which is incorporated in a novel feature screening procedure together with the use of distance correlation. The approach makes no distributional assumptions about the variables and does not require the specification of a regression model, and hence is especially attractive for variable selection given an enormous number of candidate attributes without much information about the true model for the response. The method is applied to two genetic risk problems, where issues including the uncertainty of variable selection via cross-validation, a subgroup of hard-to-classify cases, and the application of a reject option are discussed.
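
    Distance correlation itself is straightforward to compute from doubly centered pairwise distance matrices, which is the core quantity in such a screening procedure. A minimal NumPy version for one-dimensional variables, on synthetic data with two nonlinear signals:

```python
import numpy as np

def dcor(x, y):
    """Sample distance correlation between two 1-D variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])            # pairwise distance matrices
    b = np.abs(y[:, None] - y[None, :])
    def center(m):                                 # double-centering
        return m - m.mean(0) - m.mean(1)[:, None] + m.mean()
    A, B = center(a), center(b)
    dcov2 = (A * B).mean()                         # squared distance covariance
    return np.sqrt(dcov2 / np.sqrt((A * A).mean() * (B * B).mean()))

# Screen candidate predictors by distance correlation with the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = np.sin(X[:, 3]) + X[:, 7] ** 2 + 0.1 * rng.normal(size=300)  # nonlinear signal
scores = [dcor(X[:, j], y) for j in range(X.shape[1])]
print(np.argsort(scores)[::-1][:5])                # variables 3 and 7 rank on top
```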

  4. Penalized variable selection procedure for Cox models with semiparametric relative risk

    PubMed Central

    Ma, Shuangge; Liang, Hua

    2010-01-01

    We study Cox models with semiparametric relative risk, which can be partially linear with one nonparametric component, or have multiple additive or nonadditive nonparametric components. A penalized partial likelihood procedure is proposed to simultaneously estimate the parameters and select variables for both the parametric and the nonparametric parts. Two penalties are applied sequentially. The first penalty, governing the smoothness of the multivariate nonlinear covariate effect function, provides a smoothing spline ANOVA framework that is exploited to derive an empirical model selection tool for the nonparametric part. The second penalty, either the smoothly clipped absolute deviation (SCAD) penalty or the adaptive LASSO penalty, achieves variable selection in the parametric part. We show that the resulting estimator of the parametric part possesses the oracle property, and that the estimator of the nonparametric part achieves the optimal rate of convergence. The proposed procedures are shown to work well in simulation experiments, and are then applied to a real data example on sexually transmitted diseases. PMID:20802853

  5. Modeling the Temperature Fields of Copper Powder Melting in the Process of Selective Laser Melting

    NASA Astrophysics Data System (ADS)

    Saprykin, A. A.; Ibragimov, E. A.; Babakova, E. V.

    2016-08-01

    Various process variables influence the quality of the end product when items are synthesized from powder materials by selective laser melting (SLM). The authors suggest using a model of the temperature field distribution when forming single tracks and layers of PMS-1 copper powder. Based on the modeling results, it is proposed to reduce the melting of powder particles outside the scanning area.

  6. A General Semiparametric Hazards Regression Model: Efficient Estimation and Structure Selection

    PubMed Central

    Tong, Xingwei; Zhu, Liang; Leng, Chenlei; Leisenring, Wendy; Robison, Leslie L.

    2014-01-01

    We consider a general semiparametric hazards regression model that encompasses Cox's proportional hazards model and the accelerated failure time model for survival analysis. To overcome the nonexistence of the maximum likelihood estimator, we derive a kernel-smoothed profile likelihood function, and prove that the resulting estimates of the regression parameters are consistent and achieve semiparametric efficiency. In addition, we develop penalized structure selection techniques to determine which covariates constitute the accelerated failure time model and which constitute the proportional hazards model. The proposed method is able to estimate the model structure consistently and the model parameters efficiently. Furthermore, variance estimation is straightforward. The proposed estimator performs well in simulation studies and is applied to the analysis of a real data set. PMID:23824784

  7. Development of the 1984-85 Validation Selection Criteria: The Eclectic Error Prone Model.

    ERIC Educational Resources Information Center

    Advanced Technology, Inc., Reston, VA.

    The development of the error prone model (EPM) for the 1984-1985 student financial aid validation criteria for Pell Grant recipient selection is discussed, based on a comparison of the 1983-1984 EPM criteria and a newly estimated EPM. Procedures/assumptions on which the new EPM was based include: a sample of 1982-1983 Pell Grant recipients…

  8. The Development of a Culturally Fair Model for the Early Identification and Selection of Gifted Children.

    ERIC Educational Resources Information Center

    Storlie, Theodore R.; And Others

    A two-stage model for early identification and selection of gifted children in kindergarten through grade 3 was successfully developed for the Walker Full-time Gifted Program in the Flint, Michigan Community Schools. Using the Nominative Group Process of interactive decision-making, project participants, school administrators, school…

  9. Mathematical model of motion of a mixture of gases and hollow microspheres with selective permeability

    NASA Astrophysics Data System (ADS)

    Vereshchagin, A. S.; Fomin, V. M.

    2015-09-01

    A mathematical model of motion of solid particles with selective permeability and a mixture of moving gases is developed with the use of averaging principles of mechanics of multiphase media. The derived system of quasi-linear partial differential equations is studied for a particular one-dimensional isothermal case.

  10. A supplier-selection model with classification and joint replenishment of inventory items

    NASA Astrophysics Data System (ADS)

    Mohammaditabar, Davood; Hassan Ghodsypour, Seyed

    2016-06-01

    Since inventory costs are closely related to suppliers, many models in the literature have selected suppliers and allocated orders simultaneously. Such models usually consider either a single inventory item or multiple inventory items with independent holding and ordering costs. In practice, however, ordering multiple items from the same supplier reduces ordering costs. This paper presents a model for the capacity-constrained supplier-selection and order-allocation problem that considers the joint replenishment of inventory items with a direct grouping approach. In this supplier-selection problem, the following costs are considered: a fixed major ordering cost for each supplier, which is independent of the items in the order; a minor ordering cost for each item ordered from each supplier; and the inventory holding and purchasing costs. To solve the resulting NP-hard problem, a simulated annealing algorithm was proposed and compared to a modified genetic algorithm from the literature. A numerical example showed that the number of groups and selected suppliers was reduced when the major ordering cost increased relative to the other costs, and that savings were greater when the number of groups was determined by the model rather than predetermined or left ungrouped.
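
    A stripped-down simulated annealing sketch for the assignment flavor of this problem: items are assigned to suppliers, the objective charges a fixed major ordering cost per supplier used plus minor ordering and purchasing costs, and a Metropolis rule with geometric cooling accepts moves. All costs and the schedule are invented; the paper's model, with grouping and capacity constraints, is richer.

```python
import math, random

random.seed(0)
n_items, suppliers = 12, ["s1", "s2", "s3"]
major = {"s1": 100.0, "s2": 120.0, "s3": 90.0}     # fixed cost per supplier used
minor = 8.0                                        # cost per item ordered
price = {(i, s): random.uniform(10, 20) for i in range(n_items) for s in suppliers}

def cost(assign):
    used = set(assign.values())
    return (sum(major[s] for s in used)            # major ordering costs
            + minor * n_items                      # minor ordering costs
            + sum(price[(i, s)] for i, s in assign.items()))

assign = {i: random.choice(suppliers) for i in range(n_items)}
best, T = dict(assign), 50.0
for step in range(5000):
    cand = dict(assign)
    cand[random.randrange(n_items)] = random.choice(suppliers)  # move one item
    d = cost(cand) - cost(assign)
    if d < 0 or random.random() < math.exp(-d / T):             # Metropolis rule
        assign = cand
    if cost(assign) < cost(best):
        best = dict(assign)
    T *= 0.999                                     # geometric cooling
print(round(cost(best), 2), best)
```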

  11. Aggressive Adolescents in Residential Care: A Selective Review of Treatment Requirements and Models

    ERIC Educational Resources Information Center

    Knorth, Erik J.; Klomp, Martin; Van den Bergh, Peter M.; Noom, Marc J.

    2007-01-01

    This article presents a selective inventory of treatment methods of aggressive behavior. Special attention is paid to types of intervention that, according to research, are frequently used in Dutch residential youth care. These methods are based on (1) principles of (cognitive) behavior management and control, (2) the social competence model, and…

  12. On Selective Harvesting of an Inshore-Offshore Fishery: A Bioeconomic Model

    ERIC Educational Resources Information Center

    Purohit, D.; Chaudhuri, K. S.

    2004-01-01

    A bioeconomic model is developed for the selective harvesting of a single species, inshore-offshore fishery, assuming that the growth of the species is governed by the Gompertz law. The dynamical system governing the fishery is studied in depth; the local and global stability of its non-trivial steady state are examined. Existence of a bionomic…
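
    The entry above does not give the model equations, so as a worked illustration assume the usual Gompertz-growth-with-harvesting form dN/dt = r N ln(K/N) - q E N (catch proportional to effort E and stock N). Setting the right-hand side to zero gives the nontrivial steady state N* = K exp(-qE/r), which a short Euler simulation confirms numerically.

```python
import math

r, K, q, E = 0.8, 1000.0, 0.05, 6.0   # growth rate, capacity, catchability, effort

# Nontrivial steady state: r*N*ln(K/N) = q*E*N  =>  N* = K * exp(-q*E/r)
N_star = K * math.exp(-q * E / r)
print("analytic equilibrium:", round(N_star, 1))

# Forward-Euler simulation converging to the same equilibrium.
N, dt = 100.0, 0.01
for _ in range(200_000):
    N += dt * (r * N * math.log(K / N) - q * E * N)
print("simulated equilibrium:", round(N, 1))
```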

  13. Evolution of female multiple mating: A quantitative model of the "sexually selected sperm" hypothesis.

    PubMed

    Bocedi, Greta; Reid, Jane M

    2015-01-01

    Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with "sexy-son" models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and "sexy-son" processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405

  14. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    ERIC Educational Resources Information Center

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles Cutter's 3…

  15. The Effects of Selection Strategies for Bivariate Loglinear Smoothing Models on NEAT Equating Functions

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul W.

    2010-01-01

    In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
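
    The truncation above cuts off the remaining strategies, but loglinear presmoothing itself amounts to fitting a polynomial Poisson GLM to a score distribution, and choosing the polynomial degree is the selection problem at issue. A hedged sketch, assuming statsmodels: compare candidate degrees by AIC (one plausible criterion of this kind, and the theme of this collection) on a synthetic univariate score distribution rather than the bivariate NEAT setting.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
scores = np.arange(0, 41)                          # possible test scores
counts = np.bincount(rng.binomial(40, 0.6, size=1000), minlength=41)

results = {}
for degree in range(1, 7):                         # candidate polynomial degrees
    X = np.vander(scores / 40.0, degree + 1, increasing=True)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    results[degree] = fit.aic                      # loglinear smoothing fit's AIC
best = min(results, key=results.get)
print(results, "-> selected degree:", best)
```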

  16. A Cognitive Model of Document Use during a Research Project. Study I. Document Selection.

    ERIC Educational Resources Information Center

    Wang, Peiling; Soergel, Dagobert

    1998-01-01

    Proposes a model of document selection by real users of a bibliographic retrieval system. Reports on Part I of a longitudinal study of decision making on document use by academics (25 faculty and graduate students in Agricultural Economics). Examines what components are relevant to the users' decisions and what cognitive process may have occurred…

  17. Faculty Salary Equity: Issues in Regression Model Selection. AIR 1992 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Moore, Nelle

    This paper discusses the determination of college faculty salary inequity and identifies the areas in which human judgment must be used in order to conduct a statistical analysis of salary equity. In addition, it provides some informed guidelines for making those judgments. The paper provides a framework for selecting salary equity models, based…

  1. The Selection of Peer Models in Early Childhood Special Education Programs: Issue and Procedures.

    ERIC Educational Resources Information Center

    Rettig, Michael; McCarthy-Rettig, Kelly

    This paper discusses some considerations and recommendations regarding the selection of children to serve as peer models in early childhood special education classrooms. The problem of having too many potential children to consider and choose from, or not enough children to consider and choose from, is discussed and the importance of establishing…

  2. The Effect of the Model's Presence and of Negative Evidence on Infants' Selective Imitation

    ERIC Educational Resources Information Center

    Kiraly, Ildiko

    2009-01-01

    This study demonstrated selective "rational" imitation in infants in two testing conditions: in the presence or absence of the model during the response phase. In the study, 14-month-olds were more likely to imitate a tool-use behavior when a prior failed attempt emphasized the logical reason and relevance of introducing this novel means, making…

  3. Mathematical analysis and modeling of motion direction selectivity in the retina.

    PubMed

    Escobar, María-José; Pezo, Danilo; Orio, Patricio

    2013-11-01

    Motion detection is one of the most important and primitive computations performed by our visual system. Specifically in the retina, ganglion cells producing motion direction-selective responses have been addressed by different disciplines, such as mathematics, neurophysiology and computational modeling, since the beginnings of vision science. Although a number of studies have analyzed theoretical and mathematical considerations for such responses, a clear picture of the underlying cellular mechanisms is only recently emerging. In general, motion direction selectivity is based on a non-linear asymmetric computation inside a receptive field differentiating cell responses between preferred and null direction stimuli. To what extent can biological findings match these considerations? In this review, we outline theoretical and mathematical studies of motion direction selectivity, aiming to map the properties of the models onto the neural circuitry and synaptic connectivity found in the retina. Additionally, we review several compartmental models that have tried to fill this gap. Finally, we discuss the remaining challenges that computational models will have to tackle in order to fully understand the retinal motion direction-selective circuitry.
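
    A classic example of the asymmetric nonlinear computation reviewed here is the correlation-type (Hassenstein-Reichardt) detector: each spatial input is delayed and multiplied with its neighbor's undelayed signal, and the two mirror-image subunits are subtracted. The sketch below is a drastic simplification of retinal circuitry, included only to make the preferred/null asymmetry concrete.

```python
import numpy as np

def reichardt(stimulus, delay=5):
    """Opponent correlation-type motion detector.
    stimulus: (time, space) luminance sampled at two neighboring points."""
    left, right = stimulus[:, 0], stimulus[:, 1]
    d_left = np.roll(left, delay)                  # delayed copies (circular,
    d_right = np.roll(right, delay)                # for simplicity)
    # multiply each delayed input with the neighbor's direct input, then subtract
    return np.mean(d_left * right - d_right * left)

t = np.arange(200)
rightward = np.stack([np.sin(0.2 * t), np.sin(0.2 * t - 0.5)], axis=1)
leftward  = np.stack([np.sin(0.2 * t - 0.5), np.sin(0.2 * t)], axis=1)
print(reichardt(rightward), reichardt(leftward))   # opposite signs of output
```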

  4. A model for selecting assessment methods for evaluating medical students in African medical schools.

    PubMed

    Walubo, Andrew; Burch, Vanessa; Parmar, Paresh; Raidoo, Deshandra; Cassimjee, Mariam; Onia, Rudy; Ofei, Francis

    2003-09-01

    Introduction of more effective and standardized assessment methods for testing students' performance in Africa's medical institutions has been hampered by severe financial and personnel shortages. Nevertheless, some African institutions have recognized the problem and are now revising their medical curricula, and, therefore, their assessment methods. These institutions, and those yet to come, need guidance on selecting assessment methods so as to adopt models that can be sustained locally. The authors provide a model for selecting assessment methods for testing medical students' performance in African medical institutions. The model systematically evaluates factors that influence implementation of an assessment method. Six commonly used methods (the essay examinations, short-answer questions, multiple-choice questions, patient-based clinical examination, problem-based oral examination [POE], and objective structured clinical examination) are evaluated by scoring and weighting against performance, cost, suitability, and safety factors. In the model, the highest score identifies the most appropriate method. Selection of an assessment method is illustrated using two institutional models, one depicting an ideal situation in which the objective structured clinical examination was preferred, and a second depicting the typical African scenario in which the essay and short-answer-question examinations were best. The POE method received the highest score and could be recommended as the most appropriate for Africa's medical institutions, but POE assessments require changing the medical curricula to a problem-based learning approach. The authors' model is easy to understand and promotes change in the medical curriculum and method of student assessment.
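
    The scoring-and-weighting model reduces to a weighted-sum decision matrix. In the sketch below the factor weights and method scores are invented placeholders; in practice each institution would supply its own values for performance, cost, suitability, and safety.

```python
import numpy as np

methods = ["essay", "SAQ", "MCQ", "clinical exam", "POE", "OSCE"]
weights = np.array([0.4, 0.3, 0.2, 0.1])           # performance, cost, suitability, safety

# Hypothetical scores (rows: methods, columns: factors), higher is better.
scores = np.array([
    [3, 5, 4, 4],   # essay examination
    [3, 5, 4, 4],   # short-answer questions
    [4, 4, 4, 5],   # multiple-choice questions
    [5, 2, 3, 3],   # patient-based clinical examination
    [5, 3, 4, 4],   # problem-based oral examination
    [5, 1, 3, 4],   # objective structured clinical examination
])
total = scores @ weights                           # weighted sum per method
for m, s in sorted(zip(methods, total), key=lambda p: -p[1]):
    print(f"{m:15s} {s:.2f}")                      # highest score wins
```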

  5. A probabilistic union model with automatic order selection for noisy speech recognition.

    PubMed

    Jancovic, P; Ming, J

    2001-09-01

    A critical issue in exploiting the potential of the sub-band-based approach to robust speech recognition is the method of combining the sub-band observations, for selecting the bands unaffected by noise. A new method for this purpose, i.e., the probabilistic union model, was recently introduced. This model has been shown to be capable of dealing with band-limited corruption, requiring no knowledge about the band position and statistical distribution of the noise. A parameter within the model, which we call its order, gives the best results when it equals the number of noisy bands. Since this information may not be available in practice, in this paper we introduce an automatic algorithm for selecting the order, based on the state duration pattern generated by the hidden Markov model (HMM). The algorithm has been tested on the TIDIGITS database corrupted by various types of additive band-limited noise with unknown noisy bands. The results have shown that the union model equipped with the new algorithm can achieve a recognition performance similar to that achieved when the number of noisy bands is known. The results show a very significant improvement over the traditional full-band model, without requiring prior information on either the position or the number of noisy bands. The principle of the algorithm for selecting the order based on state duration may also be applied to other sub-band combination methods.

  6. Pitfalls of hypothesis tests and model selection on bootstrap samples: Causes and consequences in biometrical applications.

    PubMed

    Janitza, Silke; Binder, Harald; Boulesteix, Anne-Laure

    2016-05-01

    The bootstrap method has become a widely used tool applied in diverse areas where results based on asymptotic theory are scarce. It can be applied, for example, for assessing the variance of a statistic, a quantile of interest or for significance testing by resampling from the null hypothesis. Recently, some approaches have been proposed in the biometrical field where hypothesis testing or model selection is performed on a bootstrap sample as if it were the original sample. P-values computed from bootstrap samples have been used, for example, in the statistics and bioinformatics literature for ranking genes with respect to their differential expression, for estimating the variability of p-values and for model stability investigations. Procedures which make use of bootstrapped information criteria are often applied in model stability investigations and model averaging approaches as well as when estimating the error of model selection procedures which involve tuning parameters. From the literature, however, there is evidence that p-values and model selection criteria evaluated on bootstrap data sets do not represent what would be obtained on the original data or new data drawn from the overall population. We explain the reasons for this and, through the use of a real data set and simulations, we assess the practical impact on procedures relevant to biometrical applications in cases where it has not yet been studied. Moreover, we investigate the behavior of subsampling (i.e., drawing from a data set without replacement) as a potential alternative solution to the bootstrap for these procedures.
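
    The central pitfall is easy to reproduce: a hypothesis test computed on a bootstrap sample, as if it were the original sample, rejects a true null far more often than the nominal level, because resampling stacks bootstrap variability on top of the original sampling variability. A small simulation, assuming SciPy:

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
n, reps = 50, 5000
rej_orig = rej_boot = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, size=n)               # H0 is true: mean = 0
    rej_orig += ttest_1samp(x, 0.0).pvalue < 0.05
    xb = rng.choice(x, size=n, replace=True)       # one bootstrap resample
    rej_boot += ttest_1samp(xb, 0.0).pvalue < 0.05

print("rejection rate, original data: ", rej_orig / reps)  # close to 0.05
print("rejection rate, bootstrap data:", rej_boot / reps)  # clearly inflated
```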

  7. Estimation and Model Selection for Finite Mixtures of Latent Interaction Models

    ERIC Educational Resources Information Center

    Hsu, Jui-Chen

    2011-01-01

    Latent interaction models and mixture models have received considerable attention in social science research recently, but little is known about how to handle if unobserved population heterogeneity exists in the endogenous latent variables of the nonlinear structural equation models. The current study estimates a mixture of latent interaction…

  8. Encapsulation of a Decision-Making Model to Optimize Supplier Selection via Structural Equation Modeling (SEM)

    NASA Astrophysics Data System (ADS)

    Sahul Hameed, Ruzanna; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Ezanee Rusli, Mohd; Yong, Lee Choon; Ghazali, Azrul; Itam, Zarina; Hakimie, Hazlinda; Beddu, Salmia; Liyana Mohd Kamal, Nur

    2016-03-01

    This paper proposes a conceptual framework for comparing the criteria and factors that influence supplier selection. A mixed-methods approach comprising qualitative and quantitative surveys will be used. The study intends to identify and define the metrics that key stakeholders at the Public Works Department (PWD) believe should be used for supplier selection. The outcomes foresee possible initiatives to bring procurement in PWD to a strategic level. The results will provide a deeper understanding of the drivers of supplier selection in the construction industry, and the output will benefit the many parties involved in supplier-selection decision-making. The findings provide useful information and a greater understanding of the perceptions that PWD executives hold regarding supplier selection, and the extent to which these perceptions are consistent with findings from prior studies. The findings can be used as input for policy makers to outline changes in the current procurement code of practice in order to enhance the degree of transparency and integrity in decision-making.

  9. Modeling of inhibitor-metalloenzyme interactions and selectivity using molecular mechanics grounded in quantum chemistry.

    PubMed

    Garmer, D R; Gresh, N; Roques, B P

    1998-04-01

    We investigated the binding properties of the metalloprotease inhibitors hydroxamate, methanethiolate, and methylphosphoramidate to a model coordination site occurring in several Zn2+ metalloproteases, including thermolysin. This was carried out using both the SIBFA (sum of interactions between fragments ab initio-computed) molecular mechanics and the SCF/MP2 procedures for the purpose of evaluating SIBFA as a metalloenzyme modeling tool. The energy-minimized structures were closely similar to the X-ray crystallographic structures of related thermolysin-inhibitor complexes. We found that selectivity between alternative geometries and between inhibitors usually stemmed from multiple interaction components included in SIBFA. The binding strength sequence is hydroxamate > methanethiolate ≥ methylphosphoramidate. The trends in interaction energy components, rankings, and preferences for mono- or bidentate binding were consistent in both computational procedures. We also compared the Zn2+ vs. Mg2+ selectivities in several other polycoordinated sites having various "hard" and "soft" qualities. This included a hexahydrate, a model representing Mg2+/Ca2+ binding sites, a chlorophyll-like structure, and a zinc finger model. The latter three favor Zn2+ over Mg2+ by a greater degree than the hydrated state, but the selectivity varies widely according to ligand "softness." SIBFA was able to match the ab initio binding energies to within 2%, with the SIBFA terms representing dispersion and charge transfer contributing the most to Zn2+/Mg2+ selectivity. These results showed this procedure to be a very capable modeling tool for metalloenzyme problems, in this case giving valuable information about the details and limitations of "hard" and "soft" selectivity trends.

  10. Model selection for system identification by means of artificial neural networks

    NASA Astrophysics Data System (ADS)

    Neuner, Hans

    2012-11-01

    System identification is one of the main tasks in modern deformation analysis. If the physical structure of the monitored object is unknown or not accessible, system identification is performed in a behavioural framework, in which the relations between input and output signals are formulated as regression models. Artificial neural networks (ANN) are a very flexible tool for modelling especially non-linear relationships between the input and output measures. The universal approximation theorem ensures that every continuous relation can be modelled with this approach. However, some structural aspects of ANN-based models, such as the number of hidden nodes or the amount of data needed to obtain good generalisation, are left unspecified by the theorem. One therefore faces a model selection problem. In this article the methodology of modelling the deformations of a lock occurring due to water level and temperature changes is described. We emphasise the aspect of model selection by presenting and discussing the results of various approaches for determining the number of hidden nodes. The first is cross-validation. The second is a weight deletion technique based on the exact computation of the Hessian matrix. Finally, the third method has a rigorous theoretical background and is based on the capacity concept of a model structure.
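
    The cross-validation approach to choosing the number of hidden nodes can be sketched with scikit-learn's grid search over hidden-layer widths. The two inputs mimic the water-level and temperature predictors described above, but the data and the candidate widths are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))                      # e.g., water level, temperature
y = np.tanh(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=400)

pipe = make_pipeline(StandardScaler(),
                     MLPRegressor(max_iter=5000, random_state=0))
grid = {"mlpregressor__hidden_layer_sizes": [(n,) for n in (2, 4, 8, 16, 32)]}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)  # 5-fold cross-validation
print(search.best_params_)                         # selected number of hidden nodes
```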

  11. A two-temperature model for selective photothermolysis laser treatment of port wine stains

    PubMed Central

    Li, D; Wang, G X; He, Y L; Kelly, K M; Wu, W J; Wang, Y X; Ying, Z X

    2014-01-01

    Selective photothermolysis is the basic principle underlying laser treatment of vascular malformations such as port wine stain birthmarks (PWS). During cutaneous laser surgery, blood inside blood vessels is heated due to selective absorption of laser energy, while the surrounding normal tissue is spared. As a result, the blood and the surrounding tissue experience a local thermodynamic non-equilibrium condition. Traditionally, the PWS laser treatment process was simulated by a discrete-blood-vessel model that simplifies blood vessels into parallel cylinders buried in a multi-layer skin model. In this paper, PWS skin is treated as a porous medium made of tissue matrix and blood in the dermis. A two-temperature model is constructed following the local thermal non-equilibrium theory of porous media. Both transient and steady heat conduction problems are solved in a unit cell for the interfacial heat transfer between blood vessels and the surrounding tissue to close the two-temperature model. The model is validated by good agreement with results from the discrete-blood-vessel model. Its characteristics are further illustrated through a comparison with the previously used homogeneous model, in which a local thermodynamic equilibrium between the blood and the surrounding tissue is assumed. PMID:25110458

  12. Potential roles of the interaction between model V1 neurons with orientation-selective and non-selective surround inhibition in contour detection

    PubMed Central

    Yang, Kai-Fu; Li, Chao-Yi; Li, Yong-Jie

    2015-01-01

    Both neurons with orientation-selective and with non-selective surround inhibition have been observed in the primary visual cortex (V1) of primates and cats. Though inhibition coming from the surround region (known as the non-classical receptive field, nCRF) is considered to play a critical role in visual perception, the specific roles of orientation-selective and non-selective inhibition in the task of contour detection are less well known. To clarify this question, we first carried out a computational analysis of the contour detection performance of V1 neurons with different types of surround inhibition, on the basis of which we then proposed two integrated models that evaluate their roles in this perceptual task by combining the two types of surround inhibition in two different ways. The two models were evaluated with synthetic images and a set of challenging natural images, and the results show that both integrated models outperform the typical models with orientation-selective or non-selective inhibition alone. The findings of this study suggest that V1 neurons with different types of center–surround interaction work in cooperative and adaptive ways, at least when extracting organized structures from cluttered natural scenes. This work is expected to inspire efficient phenomenological models for engineering applications in the field of machine vision. PMID:26136664

  13. The effect of synaptic plasticity on orientation selectivity in a balanced model of primary visual cortex

    PubMed Central

    Gonzalo Cogno, Soledad; Mato, Germán

    2015-01-01

    Orientation selectivity is ubiquitous in the primary visual cortex (V1) of mammals. In cats and monkeys, V1 displays spatially ordered maps of orientation preference. Instead, in mice, squirrels, and rats, orientation selective neurons in V1 are not spatially organized, giving rise to a seemingly random pattern usually referred to as a salt-and-pepper layout. The fact that such different organizations can sharpen orientation tuning raises questions about the structural role of the intracortical connections, specifically the influence of plasticity and the generation of functional connectivity. In this work, we analyze the effect of plasticity processes on orientation selectivity for both scenarios. We study a computational model of layer 2/3 and a reduced one-dimensional model of orientation selective neurons, both in the balanced state. We analyze two plasticity mechanisms. The first involves spike-timing dependent plasticity (STDP), while the second considers the reconnection of the interactions according to the preferred orientations of the neurons. We find that under certain conditions STDP can indeed improve selectivity, but it works in a somewhat unexpected way, that is, by effectively decreasing the modulated part of the intracortical connectivity relative to the non-modulated part. For the reconnection mechanism we find that increasing functional connectivity leads, in fact, to a decrease in orientation selectivity if the network is in a stable balanced state. Both counterintuitive results are a consequence of the dynamics of the balanced state. We also find that selectivity can increase due to a reconnection process if the resulting connections give rise to an unstable balanced state. We compare these findings with recent experimental results. PMID:26347615

  14. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-01-01

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, having considered the performance mean of the genotypes in all environments (no interaction effect); the performance in each environment (with an interaction effect); and the simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection. PMID:27173301

  15. Simultaneous selection for cowpea (Vigna unguiculata L.) genotypes with adaptability and yield stability using mixed models.

    PubMed

    Torres, F E; Teodoro, P E; Rodrigues, E V; Santos, A; Corrêa, A M; Ceccon, G

    2016-04-29

    The aim of this study was to select erect cowpea (Vigna unguiculata L.) genotypes simultaneously for high adaptability, stability, and grain yield in Mato Grosso do Sul, Brazil using mixed models. We conducted six trials of different cowpea genotypes in 2005 and 2006 in Aquidauana, Chapadão do Sul, Dourados, and Primavera do Leste. The experimental design was randomized complete blocks with four replications and 20 genotypes. Genetic parameters were estimated by restricted maximum likelihood/best linear unbiased prediction, and selection was based on the harmonic mean of the relative performance of genetic values method using three strategies: selection based on the predicted breeding value, having considered the performance mean of the genotypes in all environments (no interaction effect); the performance in each environment (with an interaction effect); and the simultaneous selection for grain yield, stability, and adaptability. The MNC99542F-5 and MNC99-537F-4 genotypes could be grown in various environments, as they exhibited high grain yield, adaptability, and stability. The average heritability of the genotypes was moderate to high and the selective accuracy was 82%, indicating an excellent potential for selection.

  16. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection.

    PubMed

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481
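
    A minimal sketch of a dopamine-gated, two-term Hebbian update of cortico-striatal weights in the spirit of the rule described above; the layer sizes, learning rate, dopamine baseline and signal values are illustrative assumptions, not the authors' implementation:

      import numpy as np

      rng = np.random.default_rng(0)
      n_cortex, n_striatum = 20, 10
      W = rng.uniform(0.0, 0.1, size=(n_striatum, n_cortex))  # cortico-striatal weights

      def hebb_update(W, pre, post, dopamine, lr=0.01, baseline=0.5):
          # Two-term rule: the weight change is driven solely by the deviation
          # of dopamine from baseline (peak -> potentiation, dip -> depression).
          delta = lr * (dopamine - baseline) * np.outer(post, pre)
          return np.clip(W + delta, 0.0, 1.0)

      pre = rng.random(n_cortex)       # cortical input activity
      post = rng.random(n_striatum)    # striatal (Go/NoGo) activity
      W = hebb_update(W, pre, post, dopamine=0.9)   # rewarded trial: dopamine peak
      W = hebb_update(W, pre, post, dopamine=0.1)   # punished trial: dopamine dip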

  17. Selective binding of lectins to normal and neoplastic urothelium in rat and mouse bladder carcinogenesis models.

    PubMed

    Zupančič, Daša; Kreft, Mateja Erdani; Romih, Rok

    2014-01-01

    Bladder cancer adjuvant intravesical therapy could be optimized by more selective targeting of neoplastic tissue via specific binding of lectins to plasma membrane carbohydrates. Our aim was to establish rat and mouse models of bladder carcinogenesis to investigate in vivo and ex vivo binding of selected lectins to the luminal surface of normal and neoplastic urothelium. Male rats and mice were treated with 0.05 % N-butyl-N-(4-hydroxybutyl)nitrosamine (BBN) in drinking water and used for ex vivo and in vivo lectin binding experiments. Urinary bladder samples were also used for paraffin embedding, scanning electron microscopy and immunofluorescence labelling of uroplakins. During carcinogenesis, the structure of the urinary bladder luminal surface changed from microridges to microvilli and ropy ridges, and the expression of the urothelial-specific glycoproteins, the uroplakins, decreased. Ex vivo and in vivo lectin binding experiments gave comparable results. Jacalin (lectin from Artocarpus integrifolia) exhibited the highest selectivity for neoplastic compared to normal urothelium of rats and mice. The binding of lectin from Amaranthus caudatus decreased in the rat model and increased in the mouse carcinogenesis model, indicating interspecies variations of plasma membrane glycosylation. Lectin from Datura stramonium showed higher affinity for neoplastic than for normal urothelium in both the rat and mouse models. The BBN-induced animal models of bladder carcinogenesis offer a promising approach for lectin binding experiments and further lectin-mediated targeted drug delivery research. Moreover, in vivo lectin binding experiments are comparable to ex vivo experiments, which should be considered when planning and optimizing future research.

  18. A Biologically Inspired Computational Model of Basal Ganglia in Action Selection

    PubMed Central

    Baston, Chiara; Ursino, Mauro

    2015-01-01

    The basal ganglia (BG) are a subcortical structure implicated in action selection. The aim of this work is to present a new cognitive neuroscience model of the BG, which aspires to represent a parsimonious balance between simplicity and completeness. The model includes the 3 main pathways operating in the BG circuitry, that is, the direct (Go), indirect (NoGo), and hyperdirect pathways. The main original aspects, compared with previous models, are the use of a two-term Hebb rule to train synapses in the striatum, based exclusively on neuronal activity changes caused by dopamine peaks or dips, and the role of the cholinergic interneurons (affected by dopamine themselves) during learning. Some examples are displayed, concerning a few paradigmatic cases: action selection in basal conditions, action selection in the presence of a strong conflict (where the role of the hyperdirect pathway emerges), synapse changes induced by phasic dopamine, and learning new actions based on a previous history of rewards and punishments. Finally, some simulations show the model working under conditions of altered dopamine levels, to illustrate pathological cases (dopamine depletion in parkinsonian subjects or dopamine hypermedication). Due to its parsimonious approach, the model may represent a straightforward tool to analyze BG functionality in behavioral experiments. PMID:26640481

  19. EXONEST: Bayesian model selection applied to the detection and characterization of exoplanets via photometric variations

    SciTech Connect

    Placek, Ben; Knuth, Kevin H.; Angerhausen, Daniel E-mail: kknuth@albany.edu

    2014-11-10

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.

  20. A Bayesian hierarchical model with spatial variable selection: the effect of weather on insurance claims

    PubMed Central

    Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth

    2013-01-01

    Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890

  1. QTL mapping in outbred half-sib families using Bayesian model selection.

    PubMed

    Fang, M; Liu, J; Sun, D; Zhang, Y; Zhang, Q; Zhang, Y; Zhang, S

    2011-09-01

    In this article, we propose a model selection method, the Bayesian composite model space approach, to map quantitative trait loci (QTL) in a half-sib population for continuous and binary traits. In our method, the identity-by-descent-based variance component model is used. To demonstrate the performance of this model, the method was applied to map QTL underlying production traits on BTA6 in a Chinese half-sib dairy cattle population. A total of four QTLs were detected, whereas only one QTL was identified using the traditional least square (LS) method. We also conducted two simulation experiments to validate the efficiency of our method. The results suggest that the proposed method based on a multiple-QTL model is efficient in mapping multiple QTL for an outbred half-sib population and is more powerful than the LS method based on a single-QTL model.

  2. Generalized additive modeling with implicit variable selection by likelihood-based boosting.

    PubMed

    Tutz, Gerhard; Binder, Harald

    2006-12-01

    The use of generalized additive models in statistical data analysis suffers from the restriction to few explanatory variables and the problems of selection of smoothing parameters. Generalized additive model boosting circumvents these problems by means of stagewise fitting of weak learners. A fitting procedure is derived which works for all simple exponential family distributions, including binomial, Poisson, and normal response variables. The procedure combines the selection of variables and the determination of the appropriate amount of smoothing. Penalized regression splines and the newly introduced penalized stumps are considered as weak learners. Estimates of standard deviations and stopping criteria, which are notorious problems in iterative procedures, are based on an approximate hat matrix. The method is shown to be a strong competitor to common procedures for the fitting of generalized additive models. In particular, in high-dimensional settings with many nuisance predictor variables it performs very well. PMID:17156269
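
    A compact sketch of the stagewise idea: componentwise boosting with stumps as weak learners on a Bernoulli response. This uses a gradient-flavoured stand-in for the likelihood-based updates of the paper, and the data, shrinkage and stopping rule are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(12)
      n, p = 300, 10
      X = rng.standard_normal((n, p))
      y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(float)

      eta = np.zeros(n)                 # additive predictor
      nu, n_steps = 0.1, 200            # shrinkage and number of boosting steps
      for _ in range(n_steps):
          resid = y - 1 / (1 + np.exp(-eta))   # gradient of the Bernoulli log-likelihood
          best = None
          for j in range(p):                   # componentwise stump search
              g = X[:, j] > np.median(X[:, j])
              fit = np.where(g, resid[g].mean(), resid[~g].mean())
              score = np.sum((resid - fit) ** 2)
              if best is None or score < best[0]:
                  best = (score, fit)
          eta += nu * best[1]                  # update with the best single stump

      print(f"training accuracy: {np.mean((eta > 0) == (y == 1)):.2f}")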

  3. Models Used to Select Strategic Planning Experts for High Technology Productions

    NASA Astrophysics Data System (ADS)

    Zakharova, Alexandra A.; Grigorjeva, Antonina A.; Tseplit, Anna P.; Ozgogov, Evgenij V.

    2016-04-01

    The article deals with the problems and specific aspects of organizing the work of experts involved in the assessment of companies that manufacture complex high-technology products. A model is presented that is intended for evaluating the competences of experts in individual functional areas of expertise. Experts are selected to build a group on the basis of tables used to determine a competence level. An expert selection model based on fuzzy logic is proposed that allows additional requirements for the expert group composition to be taken into account, with regard to the needed quality and the competence-related preferences of decision-makers. A Web-based information system model is developed for the interaction between experts and decision-makers when carrying out online examinations.

  4. Selective advantage of tolerant cultural traits in the Axelrod-Schelling model.

    PubMed

    Gracia-Lázaro, C; Floría, L M; Moreno, Y

    2011-05-01

    The Axelrod-Schelling model incorporates into the original Axelrod's model of cultural dissemination the possibility that cultural agents placed in culturally dissimilar environments move to other places, the strength of this mobility being controlled by an intolerance parameter. By allowing heterogeneity in the intolerance of cultural agents, and considering it as a cultural feature, i.e., susceptible of cultural transmission (thus breaking the original symmetry of Axelrod-Schelling dynamics), we address here the question of whether tolerant or intolerant traits are more likely to become dominant in the long-term cultural dynamics. Our results show that tolerant traits possess a clear selective advantage in the framework of the Axelrod-Schelling model. We show that the reason for this selective advantage is the development, as time evolves, of a positive correlation between the number of neighbors that an agent has in its environment and its tolerant character.
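
    A toy sketch of one simplified reading of the Axelrod-Schelling dynamics: Axelrod imitation plus relocation of culturally dissatisfied agents. Intolerance is kept homogeneous here (the paper makes it a heritable cultural feature), and the lattice size, trait counts and move rule are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(11)
      side, F, q, n_empty = 20, 3, 5, 60
      culture = rng.integers(q, size=(side * side, F))
      occupied = np.ones(side * side, dtype=bool)
      occupied[rng.choice(side * side, n_empty, replace=False)] = False
      T = 0.3   # intolerance; homogeneous here, an evolving meme in the paper

      def neighbours(i):
          x, y = divmod(i, side)
          return [((x + dx) % side) * side + (y + dy) % side
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

      for _ in range(100_000):
          i = rng.choice(np.flatnonzero(occupied))
          nbrs = [j for j in neighbours(i) if occupied[j]]
          if not nbrs:
              continue
          j = nbrs[rng.integers(len(nbrs))]
          overlap = np.mean(culture[i] == culture[j])
          if overlap < 1 and rng.random() < overlap:        # Axelrod imitation
              f = rng.choice(np.flatnonzero(culture[i] != culture[j]))
              culture[i, f] = culture[j, f]
          elif np.mean([np.mean(culture[i] == culture[k]) for k in nbrs]) < T:
              empty = np.flatnonzero(~occupied)             # Schelling relocation
              k = empty[rng.integers(empty.size)]
              culture[k] = culture[i]
              occupied[k], occupied[i] = True, False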

  5. Selective advantage of tolerant cultural traits in the Axelrod-Schelling model

    NASA Astrophysics Data System (ADS)

    Gracia-Lázaro, C.; Floría, L. M.; Moreno, Y.

    2011-05-01

    The Axelrod-Schelling model incorporates into the original Axelrod’s model of cultural dissemination the possibility that cultural agents placed in culturally dissimilar environments move to other places, the strength of this mobility being controlled by an intolerance parameter. By allowing heterogeneity in the intolerance of cultural agents, and considering it as a cultural feature, i.e., susceptible of cultural transmission (thus breaking the original symmetry of Axelrod-Schelling dynamics), we address here the question of whether tolerant or intolerant traits are more likely to become dominant in the long-term cultural dynamics. Our results show that tolerant traits possess a clear selective advantage in the framework of the Axelrod-Schelling model. We show that the reason for this selective advantage is the development, as time evolves, of a positive correlation between the number of neighbors that an agent has in its environment and its tolerant character.

  6. An Expression of Periodic Phenomena of Fashion on Sexual Selection Model with Conformity Genes and Memes

    NASA Astrophysics Data System (ADS)

    Mutoh, Atsuko; Tokuhara, Shinya; Kanoh, Masayoshi; Oboshi, Tamon; Kato, Shohei; Itoh, Hidenori

    It is generally thought that living things exhibit trends in their preferences. The emergence of new trends in successive periods is related to conformity. According to social impact theory, a minority always exists in a group, and that minority may become the majority through the actions of conforming agents: as agents promote their conforming behavior, the majority can shift. We previously proposed an evolutionary model with both genes and memes and elucidated the interaction between genes and memes in sexual selection. In this paper, we propose an agent model of sexual selection that incorporates the concept of conformity. Applying this model to an environment containing both male and female agents, we find that periodic phenomena of fashion emerge. We also report the influence of conformity and differentiation on the transition of the agents' preferences.

  7. Chain Pooling model selection as developed for the statistical analysis of a rotor burst protection experiment

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1977-01-01

    As many as three iterated statistical model deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely, (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent, (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less, and (3) a security regret strategy to be used in the absence of such prior knowledge.

  8. Sample selection versus two-part models revisited: the case of female smoking and drinking.

    PubMed

    Madden, David

    2008-03-01

    There is a well-established debate between Heckman sample selection and two-part models in health econometrics, particularly when no obvious exclusion restrictions are available. Most of this debate has focussed on the application of these models to health care expenditure. This paper revisits the debate in the context of female smoking and drinking, and evaluates the two approaches on three grounds: theoretical, practical and statistical. The two-part model is generally favoured but it is stressed that this comparison should be carried out on a case-by-case basis.

  9. Selection of mortality rates and spatial structure in a host-disease model

    NASA Astrophysics Data System (ADS)

    Socolar, Joshua E. S.; Richards, Shane; Wilson, William

    2000-03-01

    A simple model of population dynamics with evolving hosts and rapidly spreading, fatal diseases is introduced. The model is of interest to ecologists for two reasons: (1) it demonstrates a novel kin selection mechanism that limits evolution towards greater longevity; and (2) spatial organization plays a crucial role in this mechanism. For statistical physicists, the model poses the challenge of accounting for the average mortality rate after many generations. An appropriate mean-field theory has been formulated for a 1-dimensional system, but the problem takes on a very different character in 2D, where numerical results indicate that the system evolves to a critical state.

  10. Coulomb blockade model of permeation and selectivity in biological ion channels

    NASA Astrophysics Data System (ADS)

    Kaufman, I. Kh; McClintock, P. V. E.; Eisenberg, R. S.

    2015-08-01

    Biological ion channels are protein nanotubes embedded in, and passing through, the bilipid membranes of cells. Physiologically, they are of crucial importance in that they allow ions to pass into and out of cells, fast and efficiently, though in a highly selective way. Here we show that the conduction and selectivity of calcium/sodium ion channels can be described in terms of ionic Coulomb blockade in a simplified electrostatic and Brownian dynamics model of the channel. The Coulomb blockade phenomenon arises from the discreteness of electrical charge, the strong electrostatic interaction, and an electrostatic exclusion principle. The model predicts a periodic pattern of Ca2+ conduction versus the fixed charge Qf at the selectivity filter (conduction bands) with a period equal to the ionic charge. It thus provides provisional explanations of some observed and modelled conduction and valence selectivity phenomena, including the anomalous mole fraction effect and the calcium conduction bands. Ionic Coulomb blockade and resonant conduction are similar to electronic Coulomb blockade and resonant tunnelling in quantum dots. The same considerations may also be applicable to other kinds of channel, as well as to charged artificial nanopores.

  11. Disentangling the formation of contrasting tree-line physiognomies combining model selection and Bayesian parameterization for simulation models.

    PubMed

    Martínez, Isabel; Wiegand, Thorsten; Camarero, J Julio; Batllori, Enric; Gutiérrez, Emilia

    2011-05-01

    Alpine tree-line ecotones are characterized by marked changes at small spatial scales that may result in a variety of physiognomies. A set of alternative individual-based models was tested with data from four contrasting Pinus uncinata ecotones in the central Spanish Pyrenees to reveal the minimal subset of processes required for tree-line formation. A Bayesian approach combined with Markov chain Monte Carlo methods was employed to obtain the posterior distribution of model parameters, allowing the use of model selection procedures. The main features of real tree lines emerged only in models considering nonlinear responses in individual rates of growth or mortality with respect to the altitudinal gradient. Variation in tree-line physiognomy reflected mainly changes in the relative importance of these nonlinear responses, while other processes, such as dispersal limitation and facilitation, played a secondary role. Different nonlinear responses also determined the presence or absence of krummholz, in agreement with recent findings highlighting a different response of diffuse and abrupt or krummholz tree lines to climate change. The method presented here can be widely applied in individual-based simulation models and will turn model selection and evaluation in this type of models into a more transparent, effective, and efficient exercise.
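
    A bare-bones sketch of the kind of Metropolis sampler such studies use to obtain a posterior over simulation-model parameters; the Gaussian pseudo-likelihood, flat prior, toy data and step size here are placeholders for a real individual-based tree-line simulator:

      import numpy as np

      rng = np.random.default_rng(1)

      def log_likelihood(theta, data):
          # Gaussian pseudo-likelihood comparing a (here trivial) simulator
          # output to observed data; a real study would run the
          # individual-based simulator inside this function.
          mu = theta[0] + theta[1] * np.arange(len(data))
          return -0.5 * np.sum((data - mu) ** 2)

      def metropolis(data, n_iter=5000, step=0.05):
          theta = np.zeros(2)
          ll = log_likelihood(theta, data)
          chain = []
          for _ in range(n_iter):
              prop = theta + step * rng.standard_normal(2)
              ll_prop = log_likelihood(prop, data)
              if np.log(rng.random()) < ll_prop - ll:   # flat prior assumed
                  theta, ll = prop, ll_prop
              chain.append(theta.copy())
          return np.array(chain)

      obs = 0.3 * np.arange(30) + rng.standard_normal(30)
      posterior = metropolis(obs)
      print("posterior means:", posterior[1000:].mean(axis=0))  # discard burn-in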

  12. Catalytic conversion reactions in nanoporous systems with concentration-dependent selectivity: Statistical mechanical modeling

    DOE PAGES

    Garcia, Andres; Wang, Jing; Windus, Theresa L.; Sadow, Aaron D.; Evans, James W.

    2016-05-20

    Statistical mechanical modeling is developed to describe a catalytic conversion reaction A → Bc or Bt with concentration-dependent selectivity of the products, Bc or Bt, where reaction occurs inside catalytic particles traversed by narrow linear nanopores. The associated restricted diffusive transport, which in the extreme case is described by single-file diffusion, naturally induces strong concentration gradients. Hence, by comparing kinetic Monte Carlo simulation results with analytic treatments, selectivity is shown to be impacted by strong spatial correlations induced by restricted diffusivity in the presence of reaction and also by a subtle clustering of reactants, A.

  13. Catalytic conversion reactions in nanoporous systems with concentration-dependent selectivity: Statistical mechanical modeling.

    PubMed

    García, Andrés; Wang, Jing; Windus, Theresa L; Sadow, Aaron D; Evans, James W

    2016-05-01

    Statistical mechanical modeling is developed to describe a catalytic conversion reaction A → Bc or Bt with concentration-dependent selectivity of the products, Bc or Bt, where reaction occurs inside catalytic particles traversed by narrow linear nanopores. The associated restricted diffusive transport, which in the extreme case is described by single-file diffusion, naturally induces strong concentration gradients. Furthermore, by comparing kinetic Monte Carlo simulation results with analytic treatments, selectivity is shown to be impacted by strong spatial correlations induced by restricted diffusivity in the presence of reaction and also by a subtle clustering of reactants, A.

  14. Catalytic conversion reactions in nanoporous systems with concentration-dependent selectivity: Statistical mechanical modeling

    NASA Astrophysics Data System (ADS)

    García, Andrés; Wang, Jing; Windus, Theresa L.; Sadow, Aaron D.; Evans, James W.

    2016-05-01

    Statistical mechanical modeling is developed to describe a catalytic conversion reaction A → Bc or Bt with concentration-dependent selectivity of the products, Bc or Bt, where reaction occurs inside catalytic particles traversed by narrow linear nanopores. The associated restricted diffusive transport, which in the extreme case is described by single-file diffusion, naturally induces strong concentration gradients. Furthermore, by comparing kinetic Monte Carlo simulation results with analytic treatments, selectivity is shown to be impacted by strong spatial correlations induced by restricted diffusivity in the presence of reaction and also by a subtle clustering of reactants, A.
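
    A toy kinetic Monte Carlo sketch of reaction inside a single-file nanopore, assuming illustrative adsorption, conversion and desorption rules; it omits the concentration-dependent selectivity between the two products that is the focus of the paper:

      import numpy as np

      rng = np.random.default_rng(2)
      L = 100
      pore = np.zeros(L, dtype=int)   # 0 = empty site, 1 = reactant A, 2 = product B

      for step in range(200_000):
          i = rng.integers(L)
          if pore[i] == 0 and i == 0:
              pore[i] = 1                       # A adsorbs at the pore mouth
          elif pore[i] == 1 and rng.random() < 0.01:
              pore[i] = 2                       # catalytic conversion A -> B
          elif pore[i] != 0:
              if pore[i] == 2 and i in (0, L - 1) and rng.random() < 0.5:
                  pore[i] = 0                   # product desorbs at either end
                  continue
              j = i + rng.choice((-1, 1))       # attempted hop
              if 0 <= j < L and pore[j] == 0:   # single file: particles cannot pass
                  pore[j], pore[i] = pore[i], 0

      print("A inside:", int(np.sum(pore == 1)), "B inside:", int(np.sum(pore == 2)))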

  15. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  16. An Optimization Model for the Selection of Bus-Only Lanes in a City

    PubMed Central

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers’ route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model. PMID:26214001

  17. Impacts of land cover data selection and trait parameterisation on dynamic modelling of species' range expansion.

    PubMed

    Heikkinen, Risto K; Bocedi, Greta; Kuussaari, Mikko; Heliölä, Janne; Leikola, Niko; Pöyry, Juha; Travis, Justin M J

    2014-01-01

    Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect. Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters and indicate that these need to be carefully evaluated before the models are applied to conservation planning. PMID:25265281

  18. Commentary on Factorial versus Typological Models: Complementary Evidence in the Model Selection Process

    ERIC Educational Resources Information Center

    Samuelsen, Karen

    2012-01-01

    The notion that there is often no clear distinction between factorial and typological models (von Davier, Naemi, & Roberts, this issue) is sound. As von Davier et al. state, theory often indicates a preference between these models; however the statistical criteria by which these are delineated offer much less clarity. In many ways the procedure…

  19. Selecting a CSR Model: Quality and Implications of the Model Adoption Process

    ERIC Educational Resources Information Center

    Le Floch, Kerstin Carlson; Zhang, Yu; Kurki, Anja; Herrmann, Suzannah

    2006-01-01

    The process through which a school adopts a comprehensive school reform (CSR) model has been suggested to be a key element in the lifecycle of school reform, contributing to stakeholder buy in and subsequent implementation. We studied the model adoption process, both on a national scale with survey data and in more depth with qualitative case…

  20. Core-scale solute transport model selection using Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (3H) and sodium-22 (22Na), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.

  1. An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection

    PubMed Central

    Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail

    2013-01-01

    One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.

  2. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
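
    A short sketch of the comparison strategy using statsmodels, with synthetic overdispersed counts standing in for the insect data; the competing distributions are ranked by AIC and BIC, in line with the objective criteria the paper recommends:

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      counts = rng.negative_binomial(0.5, 0.3, size=200)  # zero-heavy, overdispersed
      X = np.ones((len(counts), 1))                       # intercept-only design

      fits = {
          "Poisson": sm.GLM(counts, X, family=sm.families.Poisson()).fit(),
          "NegBin": sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit(),
      }
      for name, fit in fits.items():                      # smaller is better
          print(f"{name}: AIC={fit.aic:.1f} BIC={fit.bic:.1f}")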

  3. Geological feature selection in reservoir modelling and history matching with Multiple Kernel Learning

    NASA Astrophysics Data System (ADS)

    Demyanov, V.; Backhouse, L.; Christie, M.

    2015-12-01

    There is a continuous challenge in identifying and propagating geologically realistic features into reservoir models. Many of the contemporary geostatistical algorithms are limited by various modelling assumptions, like stationarity or Gaussianity. Another related challenge is to ensure the realistic geological features introduced into a geomodel are preserved during the model update in history matching studies, when the model properties are tuned to fit the flow response to production data. The above challenges motivate exploration and application of other statistical approaches to build and calibrate reservoir models, in particular, methods based on statistical learning. The paper proposes a novel data driven approach - Multiple Kernel Learning (MKL) - for modelling porous property distributions in sub-surface reservoirs. Multiple Kernel Learning aims to extract relevant spatial features from spatial patterns and to combine them in a non-linear way. This ability allows it to handle multiple geological scenarios, which represent different spatial scales and a range of modelling concepts/assumptions. Multiple Kernel Learning is not restricted by deterministic or statistical modelling assumptions and, therefore, is more flexible for modelling heterogeneity at different scales and integrating data and knowledge. We demonstrate an MKL application to a problem of history matching based on diverse prior information embedded into a range of possible geological scenarios. MKL was able to select the most influential prior geological scenarios and fuse the selected spatial features into a multi-scale property model. The MKL approach was applied to the Brugge history matching benchmark example by calibrating the MKL reservoir model parameters to production data. The history matching results were compared to those obtained from other contemporary approaches - EnKF and kernel PCA with stochastic optimisation.
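
    A minimal sketch of the core MKL idea, a learned convex combination of base kernels inside an SVM, using scikit-learn; the two base kernels, the synthetic stand-in data and the grid over weights are illustrative assumptions rather than the authors' algorithm:

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      X = rng.standard_normal((120, 5))          # stand-in for spatial features
      y = (X[:, 0] * X[:, 1] > 0).astype(int)    # hypothetical facies label

      def combined_kernel(w):
          # Convex combination of two base kernels: the simplest MKL setting.
          return lambda A, B: w * rbf_kernel(A, B) + (1 - w) * polynomial_kernel(A, B)

      best = max(
          (cross_val_score(SVC(kernel=combined_kernel(w)), X, y, cv=5).mean(), w)
          for w in np.linspace(0, 1, 11)
      )
      print("best CV accuracy %.2f at RBF weight %.1f" % best)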

  4. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    USGS Publications Warehouse

    Roffler, Gretchen H.; Schwartz, Michael K.; Pilgrim, Kristy L.; Talbot, Sandra; Sage, Kevin; Adams, Layne G.; Luikart, Gordon

    2016-01-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity.
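
    A compact sketch of multiple regression on distance matrices as used here, with significance assessed by a Mantel-style permutation of the genetic-distance matrix; the toy coordinates and resistance surface are hypothetical:

      import numpy as np

      def mrm(gen_dist, resist_dists, n_perm=999, seed=5):
          # Regress pairwise genetic distance on landscape-resistance distances;
          # the p-value comes from jointly permuting rows/columns of the response.
          rng = np.random.default_rng(seed)
          iu = np.triu_indices_from(gen_dist, k=1)
          y = gen_dist[iu]
          X = np.column_stack([np.ones_like(y)] + [d[iu] for d in resist_dists])
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
          count = 0
          for _ in range(n_perm):
              p = rng.permutation(gen_dist.shape[0])
              yp = gen_dist[np.ix_(p, p)][iu]
              bp = np.linalg.lstsq(X, yp, rcond=None)[0]
              r2p = 1 - np.sum((yp - X @ bp) ** 2) / np.sum((yp - yp.mean()) ** 2)
              count += r2p >= r2
          return beta, r2, (count + 1) / (n_perm + 1)

      rng0 = np.random.default_rng(0)
      x = rng0.random(15)
      resist = np.abs(x[:, None] - x[None, :])       # hypothetical resistance distances
      gen = 0.5 * resist + 0.05 * rng0.random((15, 15))
      gen = (gen + gen.T) / 2                        # symmetrize the noise
      beta, r2, p = mrm(gen, [resist])
      print(f"slope={beta[1]:.2f} R^2={r2:.2f} p={p:.3f}")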

  5. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
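
    A small sketch of the linear prediction being tested: the response to an oriented bar is taken as the sum of the spot responses the bar covers, given a receptive-field map. The grid size, bar width and random map are illustrative placeholders for measured PSP amplitudes:

      import numpy as np

      # rf[y, x]: PSP amplitude evoked by a spot flashed at each grid position.
      rf = np.random.default_rng(6).standard_normal((11, 11))

      def predict_bar_response(rf, angle_deg):
          # Linear prediction: sum the spot responses lying under a thin bar
          # passing through the receptive-field centre at the given angle.
          h, w = rf.shape
          yy, xx = np.mgrid[0:h, 0:w]
          theta = np.deg2rad(angle_deg)
          d = np.abs(-(xx - w // 2) * np.sin(theta) + (yy - h // 2) * np.cos(theta))
          return rf[d < 1.0].sum()

      tuning = [predict_bar_response(rf, a) for a in range(0, 180, 15)]
      print("predicted preferred orientation:", 15 * int(np.argmax(tuning)), "deg")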

  6. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    PubMed

    Roffler, Gretchen H; Schwartz, Michael K; Pilgrim, Kristy L; Talbot, Sandra L; Sage, George K; Adams, Layne G; Luikart, Gordon

    2016-07-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity.

  7. A Dynamical Model of Hierarchical Selection and Coordination in Speech Planning

    PubMed Central

    Tilsen, Sam

    2013-01-01

    Studies of the control of complex sequential movements have dissociated two aspects of movement planning: control over the sequential selection of movement plans, and control over the precise timing of movement execution. This distinction is particularly relevant in the production of speech: utterances contain sequentially ordered words and syllables, but articulatory movements are often executed in a non-sequential, overlapping manner with precisely coordinated relative timing. This study presents a hybrid dynamical model in which competitive activation controls selection of movement plans and coupled oscillatory systems govern coordination. The model departs from previous approaches by ascribing an important role to competitive selection of articulatory plans within a syllable. Numerical simulations show that the model reproduces a variety of speech production phenomena, such as effects of preparation and utterance composition on reaction time, and asymmetries in patterns of articulatory timing associated with onsets and codas. The model furthermore provides a unified understanding of a diverse group of phonetic and phonological phenomena which have not previously been related. PMID:23638147

  8. Identification of landscape features influencing gene flow: How useful are habitat selection models?

    PubMed

    Roffler, Gretchen H; Schwartz, Michael K; Pilgrim, Kristy L; Talbot, Sandra L; Sage, George K; Adams, Layne G; Luikart, Gordon

    2016-07-01

    Understanding how dispersal patterns are influenced by landscape heterogeneity is critical for modeling species connectivity. Resource selection function (RSF) models are increasingly used in landscape genetics approaches. However, because the ecological factors that drive habitat selection may be different from those influencing dispersal and gene flow, it is important to consider explicit assumptions and spatial scales of measurement. We calculated pairwise genetic distance among 301 Dall's sheep (Ovis dalli dalli) in southcentral Alaska using an intensive noninvasive sampling effort and 15 microsatellite loci. We used multiple regression of distance matrices to assess the correlation of pairwise genetic distance and landscape resistance derived from an RSF, and combinations of landscape features hypothesized to influence dispersal. Dall's sheep gene flow was positively correlated with steep slopes, moderate peak normalized difference vegetation indices (NDVI), and open land cover. Whereas RSF covariates were significant in predicting genetic distance, the RSF model itself was not significantly correlated with Dall's sheep gene flow, suggesting that certain habitat features important during summer (rugged terrain, mid-range elevation) were not influential to effective dispersal. This work underscores that consideration of both habitat selection and landscape genetics models may be useful in developing management strategies to both meet the immediate survival of a species and allow for long-term genetic connectivity. PMID:27330556

  9. Highly Selective Salicylketoxime-Based Estrogen Receptor β Agonists Display Antiproliferative Activities in a Glioma Model

    PubMed Central

    2016-01-01

    Estrogen receptor β (ERβ) selective agonists are considered potential therapeutic agents for a variety of pathological conditions, including several types of cancer. Their development is particularly challenging, since differences in the ligand binding cavities of the two ER subtypes α and β are minimal. We have carried out a rational design of new salicylketoxime derivatives which display unprecedentedly high levels of ERβ selectivity for this class of compounds, both in binding affinity and in cell-based functional assays. An endogenous gene expression assay was used to further characterize the pharmacological action of these compounds. Finally, these ERβ-selective agonists were found to inhibit proliferation of a glioma cell line in vitro. Most importantly, one of these compounds also proved to be active in an in vivo xenograft model of human glioma, thus demonstrating the high potential of this type of compounds against this devastating disease. PMID:25559213

  10. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    NASA Technical Reports Server (NTRS)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present the Root-Sum-Square (RSS) displacement method, which computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
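
    The RSS displacement computation itself is a one-liner over the modal matrix; a sketch with a hypothetical eigenvector array standing in for the X-33 FEM output:

      import numpy as np

      # phi[m, d]: modal displacement of mode m at selected DOF d,
      # taken from the FEM eigenvectors (shapes here are illustrative).
      rng = np.random.default_rng(7)
      phi = rng.standard_normal((50, 6))        # 50 modes, 6 DOFs of interest

      rss = np.sqrt(np.sum(phi ** 2, axis=1))   # root-sum-square per mode
      ranking = np.argsort(rss)[::-1]           # modes most coupled to those DOFs
      print("top 5 target modes:", ranking[:5] + 1)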

  11. Forward-in-Time, Spatially Explicit Modeling Software to Simulate Genetic Lineages Under Selection

    PubMed Central

    Currat, Mathias; Gerbault, Pascale; Di, Da; Nunes, José M.; Sanchez-Mazas, Alicia

    2015-01-01

    SELECTOR is a software package for studying the evolution of multiallelic genes under balancing or positive selection while simulating complex evolutionary scenarios that integrate demographic growth and migration in a spatially explicit population framework. Parameters can be varied both in space and time to account for geographical, environmental, and cultural heterogeneity. SELECTOR can be used within an approximate Bayesian computation estimation framework. We first describe the principles of SELECTOR and validate the algorithms by comparing its outputs for simple models with theoretical expectations. Then, we show how it can be used to investigate genetic differentiation of loci under balancing selection in interconnected demes with spatially heterogeneous gene flow. We identify situations in which balancing selection reduces genetic differentiation between population groups compared with neutrality and explain conflicting outcomes observed for human leukocyte antigen loci. These results and three previously published applications demonstrate that SELECTOR is efficient and robust for building insight into human settlement history and evolution. PMID:26949332

  12. Consideration in selecting crops for the human-rated life support system: a Linear Programming model

    NASA Technical Reports Server (NTRS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided), but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste- and variety-driven food choices can be built into the model.
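
    A miniature diet-style linear program in the spirit of the model, using scipy; the crop list, nutrient contents, area costs and requirements are invented placeholders, not CELSS data:

      import numpy as np
      from scipy.optimize import linprog

      # Columns: wheat, potato, soybean, lettuce (hypothetical crops).
      area_cost = np.array([1.0, 0.8, 1.2, 0.5])    # growing area per kg/day produced
      # Rows: kcal, protein (g), vitamin C (mg) supplied per kg of each crop.
      nutrients = np.array([[3400, 800, 4200, 150],
                            [120, 20, 360, 14],
                            [0, 200, 60, 40]])
      daily_need = np.array([2800, 90, 60])         # per crew member

      res = linprog(c=area_cost,                    # minimize total growing area
                    A_ub=-nutrients, b_ub=-daily_need,  # i.e. supply >= need
                    bounds=[(0, None)] * 4)
      print("kg/day per crop:", res.x.round(2))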

  13. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    PubMed Central

    Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar

    2016-01-01

    Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy–maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed the other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validation in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion: This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
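
    A scikit-learn sketch of the filtered-selection-plus-SVM pipeline; plain mutual-information ranking stands in for mRMR (which additionally penalizes redundancy among already-selected genes), and the expression matrix is synthetic:

      import numpy as np
      from sklearn.feature_selection import SelectKBest, mutual_info_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      rng = np.random.default_rng(8)
      X = rng.standard_normal((90, 500))           # synthetic expression matrix
      y = (X[:, 0] + X[:, 1] > 0).astype(int)      # stand-in cancer/normal labels

      # Filter step (top 30 genes by mutual information) feeding an RBF SVM.
      pipe = make_pipeline(SelectKBest(mutual_info_classif, k=30), SVC(kernel="rbf"))
      print("10-fold CV accuracy:", cross_val_score(pipe, X, y, cv=10).mean().round(3))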

  14. SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH

    EPA Science Inventory

    While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...

  15. Modulation Depth Estimation and Variable Selection in State-Space Models for Neural Interfaces

    PubMed Central

    Hochberg, Leigh R.; Donoghue, John P.; Brown, Emery N.

    2015-01-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID

  16. Modulation depth estimation and variable selection in state-space models for neural interfaces.

    PubMed

    Malik, Wasim Q; Hochberg, Leigh R; Donoghue, John P; Brown, Emery N

    2015-02-01

    Rapid developments in neural interface technology are making it possible to record increasingly large signal sets of neural activity. Various factors such as asymmetrical information distribution and across-channel redundancy may, however, limit the benefit of high-dimensional signal sets, and the increased computational complexity may not yield corresponding improvement in system performance. High-dimensional system models may also lead to overfitting and lack of generalizability. To address these issues, we present a generalized modulation depth measure using the state-space framework that quantifies the tuning of a neural signal channel to relevant behavioral covariates. For a dynamical system, we develop computationally efficient procedures for estimating modulation depth from multivariate data. We show that this measure can be used to rank neural signals and select an optimal channel subset for inclusion in the neural decoding algorithm. We present a scheme for choosing the optimal subset based on model order selection criteria. We apply this method to neuronal ensemble spike-rate decoding in neural interfaces, using our framework to relate motor cortical activity with intended movement kinematics. With offline analysis of intracortical motor imagery data obtained from individuals with tetraplegia using the BrainGate neural interface, we demonstrate that our variable selection scheme is useful for identifying and ranking the most information-rich neural signals. We demonstrate that our approach offers several orders of magnitude lower complexity but virtually identical decoding performance compared to greedy search and other selection schemes. Our statistical analysis shows that the modulation depth of human motor cortical single-unit signals is well characterized by the generalized Pareto distribution. Our variable selection scheme has wide applicability in problems involving multisensor signal modeling and estimation in biomedical engineering systems. PMID

  17. Selecting an interprofessional education model for a tertiary health care setting.

    PubMed

    Menard, Prudy; Varpio, Lara

    2014-07-01

    The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to briefly outline the five models of IPE presently offered in the literature, and to illustrate how a healthcare setting can select the IPE model suited to its context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can inform the implementation or modification of interprofessional practices in a tertiary healthcare setting. PMID:24678579

  18. Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred; Volden, Thomas R.

    2010-01-01

    The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
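
    As a rough illustration of the linear-algebra and statistics metrics recommended above, the sketch below computes the condition number of a candidate regressor matrix (to flag near-linear dependencies between terms) and per-term t-statistics (to flag statistically insignificant terms). The synthetic loads and term names are placeholders, not actual balance calibration data.

```python
# Screening metrics for regression model terms: condition number and
# t-statistics. Loads N1, N2 and the candidate terms are illustrative.
import numpy as np

rng = np.random.default_rng(11)
n = 300
N1, N2 = rng.standard_normal(n), rng.standard_normal(n)    # primary loads
X = np.column_stack([np.ones(n), N1, N2, N1 * N2, N1**2])  # candidate terms
y = 2.0 + 1.5 * N1 - 0.7 * N2 + 0.1 * rng.standard_normal(n)

print("condition number:", np.linalg.cond(X))   # large => near-dependency

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
s2 = (resid @ resid) / (n - X.shape[1])         # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)               # coefficient covariance
t = beta / np.sqrt(np.diag(cov))                # drop terms with small |t|
for name, ti in zip(["1", "N1", "N2", "N1*N2", "N1^2"], t):
    print(f"{name:6s} t = {ti:6.2f}")
```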

  19. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    SciTech Connect

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs and second organizing the data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
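
    Steps three and four of this process can be illustrated with a small sketch: fit a model for every combination of candidate inputs and compare the fits with more than one metric. The inputs, response, and metrics below (AIC and cross-validated error) are illustrative assumptions, not the authors' exact choices.

```python
# Fit all input combinations and rank them by two flexible metrics.
# Synthetic age/usage/exposure summaries stand in for stockpile data.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
names = ["age", "usage", "humidity", "temp_cycles"]
X = rng.standard_normal((200, 4))
y = 0.8 * X[:, 0] + 0.4 * X[:, 2] + 0.3 * rng.standard_normal(200)

results = []
for k in range(1, 5):
    for combo in itertools.combinations(range(4), k):
        Xc = X[:, combo]
        model = LinearRegression().fit(Xc, y)
        rss = ((y - model.predict(Xc)) ** 2).sum()
        aic = 200 * np.log(rss / 200) + 2 * (k + 2)   # +2: intercept, variance
        cv = -cross_val_score(LinearRegression(), Xc, y,
                              scoring="neg_mean_squared_error", cv=5).mean()
        results.append((aic, cv, [names[i] for i in combo]))

for aic, cv, combo in sorted(results)[:3]:            # leading contenders
    print(f"AIC={aic:7.1f}  CV-MSE={cv:.3f}  inputs={combo}")
```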

  20. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs and second organizing the data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.

  1. Stochastic approach to reconstruction of dynamical systems: optimal model selection criterion

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E. M.; Feigin, A. M.

    2011-12-01

    Most known observable systems are complex and high-dimensional, which makes exact long-term forecasting of their behavior impossible. The stochastic approach to reconstructing such systems offers hope of describing the important qualitative features of their behavior in a low-dimensional way, while all remaining dynamics is modelled as stochastic disturbance. This report is devoted to the application of Bayesian evidence to optimal stochastic model selection when reconstructing the evolution operator of an observed system. The idea of Bayesian evidence is to find a compromise between the model's predictiveness and the quality of its fit to the data. We represent the evolution operator of the investigated system in the form of a random dynamical system comprising deterministic and stochastic parts, both parameterized by an artificial neural network. We then use the Bayesian evidence criterion to estimate the optimal complexity of the model, i.e. both the number of parameters and the dimension corresponding to the most probable model given the data. We demonstrate on a number of model examples that a model with a non-uniformly distributed stochastic part (corresponding to non-Gaussian perturbations of the evolution operator) is optimal in the general case. Further, we show that a simple stochastic model can be the most suitable for reconstructing the evolution operator underlying complex observed dynamics, even in the case of a deterministic high-dimensional system. The workability of the suggested approach for modeling and forecasting real measured geophysical dynamics is investigated.
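
    For a concrete, much-simplified illustration of evidence-based selection, the sketch below computes the exact log marginal likelihood (Bayesian evidence) of Gaussian-prior polynomial models of increasing complexity; the evidence typically peaks at a moderate model order rather than at the most flexible model. The data, prior precision, and noise precision are illustrative assumptions, not the report's neural-network construction.

```python
# Bayesian evidence for polynomial models with a Gaussian prior on weights:
# y ~ N(0, X X^T / alpha + I / beta), so the evidence is a Gaussian logpdf.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(t.size)

alpha, beta = 1.0, 25.0              # prior precision, noise precision (assumed)

def log_evidence(degree):
    """Exact marginal likelihood of a Gaussian-prior polynomial model."""
    X = np.vander(t, degree + 1, increasing=True)
    cov = X @ X.T / alpha + np.eye(t.size) / beta
    return multivariate_normal(mean=np.zeros(t.size), cov=cov).logpdf(y)

for d in range(1, 8):                # evidence balances fit against complexity
    print(d, round(log_evidence(d), 2))
```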

  2. Geographic selection bias of occurrence data influences transferability of invasive Hydrilla verticillata distribution models

    PubMed Central

    Barnes, Matthew A; Jerde, Christopher L; Wittmann, Marion E; Chadderton, W Lindsay; Ding, Jianqing; Zhang, Jialiang; Purcell, Matthew; Budhathoki, Milan; Lodge, David M

    2014-01-01

    Due to socioeconomic differences, the accuracy and extent of reporting on the occurrence of native species differ among countries, which can impact the performance of species distribution models. We assessed the effect of geographical biases in occurrence data on model performance using Hydrilla verticillata as a case study. We used Maxent to predict the potential North American distribution of the aquatic invasive macrophyte based upon training data from its native range. We produced a model using all available native range occurrence data, then explored the change in model performance produced by omitting subsets of training data based on political boundaries. We also compared those results with models trained on data from which a random sample of occurrence data was omitted from across the native range. Although most models accurately predicted the occurrence of H. verticillata in North America (AUC > 0.7600), data omissions influenced model predictions. Omitting data based on political boundaries resulted in larger shifts in model accuracy than omitting randomly selected occurrence data. For well-documented species like H. verticillata, missing records from single countries or ecoregions may minimally influence model predictions, but for species with fewer documented occurrences or poorly understood ranges, geographic biases could misguide predictions. Regardless of focal species, we recommend that future species distribution modeling efforts begin with a reflection on potential spatial biases of available occurrence data. Improved biodiversity surveillance and reporting will provide benefit not only in invaded ranges but also within under-reported and unexplored native ranges. PMID:25360288
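
    The omission experiment can be mimicked in a few lines: train a presence model on the full native-range data, retrain with one "country" of records withheld, and compare hold-out AUC. The sketch below uses synthetic data and logistic regression as a simple stand-in for Maxent; all names and values are assumptions.

```python
# Compare AUC of a presence model trained on full vs. region-omitted data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
env = rng.standard_normal((2000, 4))                  # environmental covariates
p = 1 / (1 + np.exp(-(1.5 * env[:, 0] - env[:, 1])))
occ = rng.binomial(1, p)                              # presence/absence records
region = rng.integers(0, 5, size=2000)                # "country" of each record
test = rng.random(2000) < 0.3                         # hold-out "invaded range"

def auc_when_omitting(omit_region=None):
    train = ~test if omit_region is None else (~test) & (region != omit_region)
    m = LogisticRegression().fit(env[train], occ[train])
    return roc_auc_score(occ[test], m.predict_proba(env[test])[:, 1])

print("all data:", round(auc_when_omitting(), 3))
for r in range(5):
    print("omit region", r, round(auc_when_omitting(r), 3))
```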

  3. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    PubMed

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of the smart city, and numerous cloud services appear on cloud-based platforms. How to select trustworthy cloud services therefore remains a significant problem on such platforms, one extensively investigated owing to the ever-growing needs of users. However, the trust relationships found in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability, measured with Jaccard's Coefficient, and the numerical distance, computed with the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach obtains optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
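
    The core computation can be sketched directly: a Jaccard overlap of rated services combined with a Pearson correlation on shared QoS ratings, scaled by a trust degree derived from interaction frequencies. The rating and frequency matrices, and the equal blending weights, are illustrative assumptions rather than the paper's values.

```python
# Trust-enhanced similarity between users, combining Jaccard + Pearson
# similarity with interaction-frequency-based trust.
import numpy as np

R = np.array([[4.0, np.nan, 3.0, 5.0],       # user x service QoS ratings
              [4.5, 2.0, np.nan, 5.0],
              [1.0, 2.5, 3.5, np.nan]])
F = np.array([[0, 8, 1], [8, 0, 2], [1, 2, 0]])   # interaction frequencies

trust = F / F.sum(axis=1, keepdims=True)          # direct trust degrees

def similarity(u, v):
    mask_u, mask_v = ~np.isnan(R[u]), ~np.isnan(R[v])
    both = mask_u & mask_v
    jaccard = both.sum() / (mask_u | mask_v).sum()
    if both.sum() < 2:
        return jaccard                            # too few co-rated services
    pearson = np.corrcoef(R[u, both], R[v, both])[0, 1]
    return 0.5 * jaccard + 0.5 * pearson          # assumed equal weighting

def trust_enhanced_similarity(u, v):
    return trust[u, v] * similarity(u, v)         # trust modifies similarity

print(trust_enhanced_similarity(0, 1))
```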

  4. Experiment and modeling of exit-selecting behaviors during a building evacuation

    NASA Astrophysics Data System (ADS)

    Fang, Zhiming; Song, Weiguo; Zhang, Jun; Wu, Hao

    2010-02-01

    The evacuation process in a teaching building with two neighboring exits is investigated by means of experiment and modeling. The basic parameters, such as flow, density and velocity of pedestrians in the exit area, are measured. The exit-selecting phenomenon in the experiment is analyzed, and it is found that pedestrians prefer the closer exit even when the other exit is only a little farther away. To understand this phenomenon, we reproduce the experimental process with a modified biased random walk model, in which the preference for the closer exit is achieved through the drift direction and the drift force. Our simulations yield a calibrated value of the drift force: when it is 0.56, there is good agreement between simulation and experiment in the number of pedestrians selecting the closer exit, the average velocity through the exits, the cumulative distribution of the instantaneous velocity, and the fundamental diagram of the flow through the exits. Further simulation results show that pedestrians tend to select the exit nearer to them, especially when the crowd density is small or medium. If the density is large enough, however, the flow rates of the two exits become comparable because of detour behaviors. This reflects the fact that a crowd may not act rationally enough to optimize the usage of multiple exits, especially in an emergency.
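
    A toy version of the biased random walk shows how a drift-force parameter biases step choice toward the nearer exit. The grid, update rule, and single-agent setting below are schematic assumptions; only the value d = 0.56 comes from the calibration reported above.

```python
# Toy biased random walk: step probabilities are biased toward the closer
# exit in proportion to a drift force d.
import numpy as np

rng = np.random.default_rng(3)
d = 0.56                                    # calibrated drift force
exits = np.array([[0, 5], [0, 15]])         # two exits on the left wall

def step_probs(pos, exit_pos):
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    drift = (exit_pos - pos) / np.linalg.norm(exit_pos - pos)  # drift direction
    w = 1.0 + d * (moves @ drift)           # bias each move by its alignment
    w = np.clip(w, 0, None)
    return moves, w / w.sum()

def evacuate(start):
    pos = np.array(start, dtype=float)
    target = exits[np.argmin(np.linalg.norm(exits - pos, axis=1))]  # closer exit
    steps = 0
    while np.linalg.norm(pos - target) > 1 and steps < 500:
        moves, p = step_probs(pos, target)
        pos += moves[rng.choice(4, p=p)]
        steps += 1
    return steps

print(np.mean([evacuate((10, rng.integers(0, 20))) for _ in range(100)]))
```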

  5. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis

    PubMed Central

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of the smart city, and numerous cloud services appear on cloud-based platforms. How to select trustworthy cloud services therefore remains a significant problem on such platforms, one extensively investigated owing to the ever-growing needs of users. However, the trust relationships found in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability, measured with Jaccard's Coefficient, and the numerical distance, computed with the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach obtains optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments. PMID:26606388

  6. Identifying Loci Under Selection Against Gene Flow in Isolation-with-Migration Models

    PubMed Central

    Sousa, Vitor C.; Carneiro, Miguel; Ferrand, Nuno; Hey, Jody

    2013-01-01

    When divergence occurs in the presence of gene flow, there can arise an interesting dynamic in which selection against gene flow, at sites associated with population-specific adaptations or genetic incompatibilities, can cause net gene flow to vary across the genome. Loci linked to sites under selection may experience reduced gene flow and may experience genetic bottlenecks by the action of nearby selective sweeps. Data from histories such as these may be poorly fitted by conventional neutral model approaches to demographic inference, which treat all loci as equally subject to forces of genetic drift and gene flow. To allow for demographic inference in the face of such histories, as well as the identification of loci affected by selection, we developed an isolation-with-migration model that explicitly provides for variation among genomic regions in migration rates and/or rates of genetic drift. The method allows for loci to fall into any of multiple groups, each characterized by a different set of parameters, thus relaxing the assumption that all loci share the same demography. By grouping loci, the method can be applied to data with multiple loci and still have tractable dimensionality and statistical power. We studied the performance of the method using simulated data, and we applied the method to study the divergence of two subspecies of European rabbits (Oryctolagus cuniculus). PMID:23457232

  7. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    PubMed

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of the smart city, and numerous cloud services appear on cloud-based platforms. How to select trustworthy cloud services therefore remains a significant problem on such platforms, one extensively investigated owing to the ever-growing needs of users. However, the trust relationships found in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability, measured with Jaccard's Coefficient, and the numerical distance, computed with the Pearson Correlation Coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach obtains optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments. PMID:26606388

  8. Noise assisted excitation energy transfer in a linear model of a selectivity filter backbone strand.

    PubMed

    Bassereh, Hassan; Salari, Vahid; Shahbazi, Farhad

    2015-07-15

    In this paper, we investigate the effect of noise and disorder on the efficiency of excitation energy transfer (EET) in an N = 5 site linear chain with 'static' dipole-dipole couplings. Here, the disordered chain is a toy model for one strand of the selectivity filter backbone in ion channels. It has recently been discussed that quantum coherence may be present in the selectivity filter and may play a role in mediating ion conduction and ion selectivity. The question is how quantum coherence can be effective in such structures while the environment of the channel is dephasing (i.e. noisy). Basically, we expect the presence of noise to have a destructive effect on quantum transport, and we show that this expectation holds for ordered chains. However, our results indicate that introducing dephasing in disordered chains weakens the localization effects arising from multiple back-scattering due to randomness, and thereby increases the efficiency of quantum energy transfer. Thus, the presence of noise is crucial for the enhancement of EET efficiency in disordered chains. We also show that the contributions of both classical and quantum mechanical effects are required to improve the speed of energy transfer along the chain. Our analysis may help in better understanding the fast and efficient functioning of selectivity filters in ion channels.
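
    The dephasing-assisted effect can be reproduced with a compact Lindblad simulation: evolve the density matrix of a disordered 5-site chain under pure-dephasing noise and read out the population reaching the last site. The coupling strength, disorder level, dephasing rates, and readout time below are illustrative assumptions, not the paper's parameters.

```python
# Lindblad sketch of dephasing-assisted transport on a disordered chain.
# Row-major vectorization: vec(A rho B) = kron(A, B^T) vec(rho).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N, J = 5, 1.0
eps = 2.0 * rng.standard_normal(N)          # static diagonal disorder
H = np.diag(eps) + J * (np.eye(N, k=1) + np.eye(N, k=-1))
I = np.eye(N)

def liouvillian(gamma):
    Lv = -1j * (np.kron(H, I) - np.kron(I, H))     # coherent part (H symmetric)
    for n in range(N):                              # pure dephasing on each site
        P = np.outer(I[n], I[n])
        Lv += gamma * (np.kron(P, P)
                       - 0.5 * np.kron(P, I) - 0.5 * np.kron(I, P))
    return Lv

rho0 = np.zeros((N, N), complex)
rho0[0, 0] = 1.0                                    # excitation starts at site 1
for gamma in [0.0, 0.5, 2.0]:
    rho_t = expm(liouvillian(gamma) * 10.0) @ rho0.flatten()
    pop = rho_t.reshape(N, N)[-1, -1].real          # population at the far end
    print(f"gamma={gamma}: site-5 population {pop:.3f}")
```

    With zero dephasing the disorder localizes the excitation near site 1; a moderate dephasing rate typically raises the far-site population, mirroring the noise-assisted transfer described above.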

  9. Selection of informative metabolites using random forests based on model population analysis.

    PubMed

    Huang, Jian-Hua; Yan, Jun; Wu, Qing-Hua; Duarte Ferro, Miguel; Yi, Lun-Zhao; Lu, Hong-Mei; Xu, Qing-Song; Liang, Yi-Zeng

    2013-12-15

    One of the main goals of metabolomics studies is to discover informative metabolites or biomarkers, which may be used to diagnose diseases and to uncover pathology. Sophisticated feature selection approaches are required to extract the information hidden in such complex 'omics' data. In this study, a new and robust selection method is proposed that combines random forests (RF) with model population analysis (MPA) to select informative metabolites from three metabolomic datasets. According to their contribution to the classification accuracy, the metabolites were classified into three kinds: informative, non-informative, and interfering metabolites. Based on the proposed method, informative metabolites were selected for the three datasets; further analyses of these metabolites between healthy and diseased groups were then performed, with t-tests showing that the P values for all the selected metabolites were below 0.05. Moreover, the informative metabolites identified by the current method were demonstrated to be correlated with the clinical outcome under investigation. The source code of MPA-RF in Matlab can be freely downloaded from http://code.google.com/p/my-research-list/downloads/list. PMID:24209380
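
    The MPA-RF idea can be sketched in Python (the authors' Matlab code is linked above): draw many sub-models on random feature subsets and, for each metabolite, compare the error distributions of sub-models that include versus exclude it. The data, subset sizes, and number of sampled models below are illustrative assumptions.

```python
# Model population analysis with random forests: informative features are
# those whose inclusion systematically lowers sub-model error.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.standard_normal((120, 30))                 # samples x metabolites
y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(120) > 0).astype(int)

n_models, n_feats = 200, 8
included = np.zeros((n_models, 30), bool)
errors = np.zeros(n_models)
for i in range(n_models):                          # the "model population"
    feats = rng.choice(30, n_feats, replace=False)
    included[i, feats] = True
    Xtr, Xte, ytr, yte = train_test_split(X[:, feats], y, random_state=i)
    rf = RandomForestClassifier(n_estimators=50, random_state=i).fit(Xtr, ytr)
    errors[i] = 1.0 - rf.score(Xte, yte)

# Informative metabolites: mean error drops when the feature is included.
gap = np.array([errors[~included[:, j]].mean() - errors[included[:, j]].mean()
                for j in range(30)])
print("top metabolites:", np.argsort(gap)[::-1][:5])
```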

  10. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  11. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  12. 45 CFR 2522.450 - What types of programs or program models may receive special consideration in the selection process?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... receive special consideration in the selection process? 2522.450 Section 2522.450 Public Welfare... PARTICIPANTS, PROGRAMS, AND APPLICANTS Selection of AmeriCorps Programs § 2522.450 What types of programs or program models may receive special consideration in the selection process? Following the scoring...

  13. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    NASA Astrophysics Data System (ADS)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the credit assessment process. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to predict customers' credit status and its performance was evaluated. The algorithm is adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.

  14. Selecting models for a respiratory protection program: What can we learn from the scientific literature?

    PubMed Central

    Shaffer, Ronald E.; Janssen, Larry L.

    2016-01-01

    Background An unbiased source of comparable respirator performance data would be helpful in setting up a hospital respiratory protection program. Methods The scientific literature was examined to assess the extent to which performance data (respirator fit, comfort and usability) from N95 filtering facepiece respirator (FFR) models are available to assist with FFR model selection and procurement decisions. Results Ten studies were identified that met the search criteria for fit, whereas 5 studies met the criteria for comfort and usability. Conclusion Analysis of these studies indicated that it is difficult to directly use the scientific literature to inform the FFR selection process because of differences in study populations, methodologies, and other factors. Although there does not appear to be a single best fitting FFR, studies demonstrate that fit testing programs can be designed to successfully fit nearly all workers with existing products. Comfort and usability are difficult to quantify. Among the studies found, no significant differences were noted. PMID:25499425

  15. Empirical Bayes ranking and selection methods via semiparametric hierarchical mixture models in microarray studies.

    PubMed

    Noma, Hisashi; Matsui, Shigeyuki

    2013-05-20

    The main purpose of microarray studies is the screening of differentially expressed genes as candidates for further investigation. Because resources at this stage are limited, prioritizing genes is a relevant statistical task in microarray studies. For effective gene selection, parametric empirical Bayes methods for ranking and selection of the genes with the largest effect sizes have been proposed (Noma et al., 2010; Biostatistics 11: 281-289). The hierarchical mixture model incorporates differential and non-differential components and allows information borrowing across differential genes, with separation from the nuisance, non-differential genes. In this article, we develop empirical Bayes ranking methods via a semiparametric hierarchical mixture model. A nonparametric prior distribution, rather than a parametric one, for the effect sizes is specified and estimated using the "smoothing by roughening" approach of Laird and Louis (1991; Computational Statistics and Data Analysis 12: 27-37). We present applications to childhood and infant leukemia clinical studies with microarrays for exploring genes related to prognosis or disease progression.
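
    The flavor of the nonparametric approach can be sketched as follows: start from a smooth (here uniform) prior on an effect-size grid, apply a few EM-type updates of the mixing distribution in the spirit of smoothing by roughening, and rank genes by posterior mean effect. The normal data model, known standard errors, and iteration count are illustrative assumptions, not the article's exact procedure.

```python
# Nonparametric prior estimation on a grid, followed by empirical Bayes
# ranking of genes by posterior mean effect size.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
true = np.where(rng.random(500) < 0.1, rng.normal(2, 0.5, 500), 0.0)
est = true + rng.standard_normal(500)          # observed effect estimates, SE=1

grid = np.linspace(-4, 6, 201)
prior = np.full(grid.size, 1.0 / grid.size)    # start smooth (uniform)
L = norm.pdf(est[:, None], loc=grid[None, :])  # likelihood, genes x grid points

for _ in range(20):                            # a few roughening iterations
    post = prior * L                           # unnormalized posteriors
    post /= post.sum(axis=1, keepdims=True)
    prior = post.mean(axis=0)                  # EM update of mixing weights

post_mean = (post * grid).sum(axis=1)          # posterior mean effect per gene
print("top-ranked genes:", np.argsort(post_mean)[::-1][:10])
```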

  16. An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics

    PubMed Central

    Gupta, Priti; Markan, C. M.

    2014-01-01

    The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real-world applications. In this paper we use a novel time-staggered Winner Take All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates orientation selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling differently oriented bars and becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively and exhibit cluster formation, making way for adaptively building orientation-selective maps in silicon.
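
    The competitive-learning behavior described above can be abstracted in software: a few winner-take-all units refine their weights toward oriented-bar inputs, and each becomes selective to one orientation. The hardware specifics (floating-gate adaptation, time-staggered WTA) are abstracted into a simple normalized weight update; the bar generator and learning rate are illustrative assumptions.

```python
# Software abstraction of WTA competitive learning for orientation selectivity.
import numpy as np

rng = np.random.default_rng(9)

def bar(theta, size=8):
    """Binary image of an oriented bar through the image center."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2
    return (np.abs(x * np.sin(theta) - y * np.cos(theta)) < 0.8).astype(float).ravel()

angles = np.deg2rad([0, 45, 90, 135])
W = rng.random((4, 64))                          # 4 competing units
W /= np.linalg.norm(W, axis=1, keepdims=True)

for _ in range(2000):                            # competitive (WTA) learning
    x = bar(angles[rng.integers(4)]) + 0.1 * rng.random(64)
    winner = np.argmax(W @ x)                    # only the winner adapts
    W[winner] += 0.05 * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner])

for th in angles:                                # tuning: one unit per angle
    print(np.round(W @ bar(th), 2), "<- responses to", int(np.rad2deg(th)), "deg")
```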

  17. A data driven model for optimal orthosis selection in children with cerebral palsy.

    PubMed

    Ries, Andrew J; Novacheck, Tom F; Schwartz, Michael H

    2014-09-01

    A statistical orthosis selection model was developed using the Random Forest Algorithm (RFA), and its performance and potential clinical benefit were evaluated. The model predicts which of five orthosis designs - solid (SAFO), posterior leaf spring (PLS), hinged (HAFO), supra-malleolar (SMO), or foot orthosis (FO) - will provide the best gait outcome for individuals with diplegic cerebral palsy (CP). Gait outcome was defined as the change in Gait Deviation Index (GDI) between walking while wearing an orthosis and walking barefoot (ΔGDI = GDI_Orthosis - GDI_Barefoot). Model development was carried out using retrospective data from 476 individuals who wore one of the five orthosis designs bilaterally. Clinical benefit was estimated by predicting the optimal orthosis and ΔGDI for 1016 individuals (age: 12.6 (6.7) years), 540 of whom did not have an existing orthosis prescription. Among limbs with an orthosis, the model agreed with the prescription only 14% of the time. For 56% of limbs without an orthosis, the model agreed that no orthosis was expected to provide benefit. Using the current standard-of-care orthosis (i.e. existing orthosis prescriptions), ΔGDI is only +0.4 points on average. Using the orthosis prediction model, the average ΔGDI for orthosis users was estimated to improve to +5.6 points. The results of this study suggest that an orthosis selection model derived from the RFA can significantly improve outcomes from orthosis use for the diplegic CP population. Further validation of the model is warranted using data from other centers and a prospective study.
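
    A schematic of the selection model: fit a random-forest regressor mapping patient gait features (plus a code for the worn design) to the observed change in GDI, then prescribe the design with the largest predicted change. The features, encoding, and toy outcome below are illustrative assumptions, not the study's data.

```python
# Random-forest counterfactual prescription: predict dGDI under each of the
# five designs and pick the argmax.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
designs = ["SAFO", "PLS", "HAFO", "SMO", "FO"]
X = rng.standard_normal((476, 10))               # retrospective gait features
worn = rng.integers(0, 5, 476)                   # design actually worn
dGDI = rng.standard_normal(476) + (worn == 2)    # observed GDI change (toy)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.column_stack([X, worn]), dGDI)

def prescribe(features):
    """Predict the change in GDI under each design; return the best design."""
    trials = np.column_stack([np.tile(features, (5, 1)), np.arange(5)])
    return designs[int(np.argmax(rf.predict(trials)))]

print(prescribe(rng.standard_normal(10)))
```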

  18. Statistical power of model selection strategies for genome-wide association studies.

    PubMed

    Wu, Zheyang; Zhao, Hongyu

    2009-07-01

    Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the Comprehensive R Archive Network.
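
    Where the article derives power analytically, the same quantity can be approximated by simulation for one strategy. The sketch below estimates the power of a marginal search: test each marker singly and count the causal marker as detected when it passes a Bonferroni-corrected threshold. Sample size, marker count, allele frequency, and effect size are illustrative assumptions.

```python
# Monte Carlo estimate of marginal-search power under Bonferroni control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, m, maf, beta = 1000, 500, 0.3, 0.15
alpha = 0.05 / m                                  # Bonferroni-style control

hits, reps = 0, 100
for _ in range(reps):
    g = rng.binomial(2, maf, size=(n, m)).astype(float)
    y = beta * g[:, 0] + rng.standard_normal(n)   # marker 0 is the causal one
    gz = (g - g.mean(axis=0)) / g.std(axis=0)
    r = gz.T @ (y - y.mean()) / (n * y.std())     # marginal correlations
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(np.abs(t), df=n - 2)
    hits += p[0] < alpha                          # causal marker detected?
print("estimated power of marginal search:", hits / reps)
```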

  19. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek
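
    The averaging weights at issue are the standard information-criterion weights, which are easy to compute and make the sensitivity visible: when criterion values are far apart (as with a tight measurement-error covariance), the best model takes essentially all the weight; when they are closer (as with a broader total-error covariance), the weights spread. The criterion values below are illustrative placeholders.

```python
# Information-criterion model-averaging weights: w_i ~ exp(-0.5 * (IC_i - min)).
import numpy as np

def ic_weights(ic):
    """Convert information-criterion values into model-averaging weights."""
    d = np.asarray(ic) - np.min(ic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

aic_measurement_err = [210.0, 265.0, 290.0]  # widely spread: one model dominates
aic_total_err = [118.0, 120.5, 122.0]        # closer together: weights spread
print(ic_weights(aic_measurement_err))       # ~[1.00, 0.00, 0.00]
print(ic_weights(aic_total_err))             # ~[0.70, 0.20, 0.10]
```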

  20. Development of Decision Model for Selection of Appropriate Power Generation System Using Distance Based Approach Method

    NASA Astrophysics Data System (ADS)

    Widiyanto, Anugerah; Kato, Seizo; Maruyama, Naoki

    For solving decision problems in electric generation planning, a matrix-operation-based deterministic quantitative model called the Distance Based Approach (DBA) has been proposed for comparing the techno-economic and environmental features of various electric power plants. A customized computer code is developed to evaluate the overall performance of alternative energy systems from the performance pattern corresponding to the selected energy attributes. To explore the applicability and effectiveness of the proposed model, it is applied to decision problems concerning the selection of energy sources for power generation in Japan. The set of nine energy alternatives includes conventional and new energy technologies: oil-fired, natural-gas-fired, coal-fired, nuclear, hydropower, geothermal, solar photovoltaic, wind power and solar thermal plants. The set of criteria for optimized selection covers five areas of concern: energy economy, energy security, environmental protection, socio-economic development and technological aspects of electric power generation. The result is a ranking of alternative energy sources based on the Euclidean composite distance of each alternative to the designated optimal source of energy.
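
    A minimal sketch of the distance-based ranking: normalize the alternatives-by-criteria matrix, form a designated optimal alternative from the best value of each criterion, and rank alternatives by Euclidean distance to it. The scores below are made-up placeholders, not the study's assessments.

```python
# Distance Based Approach: rank alternatives by distance to the ideal point.
import numpy as np

alts = ["oil", "gas", "coal", "nuclear", "hydro", "geo", "PV", "wind", "solar-th"]
# rows: alternatives; columns: economy, security, environment, socio-econ, tech
S = np.array([[7, 5, 3, 6, 8], [8, 6, 5, 6, 8], [8, 4, 2, 6, 8],
              [6, 8, 7, 5, 6], [7, 8, 9, 7, 7], [5, 7, 8, 5, 5],
              [3, 7, 9, 6, 5], [4, 7, 9, 6, 6], [3, 6, 9, 5, 4]], float)

Z = (S - S.min(axis=0)) / (S.max(axis=0) - S.min(axis=0))  # normalize columns
ideal = Z.max(axis=0)                          # designated optimal alternative
dist = np.linalg.norm(Z - ideal, axis=1)       # composite Euclidean distance
for name, d in sorted(zip(alts, dist), key=lambda t: t[1]):
    print(f"{name:10s} {d:.3f}")               # smaller distance = higher rank
```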