Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
Selecting Single Model in Combination Forecasting Based on Cointegration Test and Encompassing Test
Jiang, Chuanjin; Zhang, Jing; Song, Fugen
2014-01-01
Combination forecasting takes the characteristics of each single forecasting method into consideration and combines them to form a composite, which increases forecasting accuracy. Existing research on combination forecasting selects single models randomly, neglecting the internal characteristics of the forecasting object. After discussing the role of the cointegration test and the encompassing test in the selection of single models, supplemented by empirical analysis, the paper gives guidance for single-model selection: no more than five suitable single models should be selected from the many alternatives for a given forecasting target, which increases accuracy and stability. PMID:24892061
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will generate lower quality control (QC) accuracy, with an error of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QCs accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
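The weighting decision described above can be sketched as a variance-ratio F-test on the replicate measurements at the two limits of quantification. The function name, significance level, and replicate values below are hypothetical illustrations, not the paper's actual script.

```python
import numpy as np
from scipy import stats

def needs_weighting(lloq_reps, uloq_reps, alpha=0.05):
    """F-test on replicate variances at the two limits of quantification.

    A significantly larger variance at the ULOQ than at the LLOQ indicates
    heteroscedastic data, so a weighted fit (1/x or 1/x^2) is required.
    """
    var_lo = np.var(lloq_reps, ddof=1)
    var_hi = np.var(uloq_reps, ddof=1)
    f_stat = var_hi / var_lo
    p_value = stats.f.sf(f_stat, len(uloq_reps) - 1, len(lloq_reps) - 1)
    return f_stat, p_value, bool(p_value < alpha)

# Hypothetical replicate responses at the low and high calibration limits
f_stat, p, weighted = needs_weighting(
    [0.101, 0.098, 0.103, 0.099, 0.102],
    [98.2, 103.5, 95.1, 106.8, 99.9],
)
```

When the test rejects, the paper then chooses between 1/x and 1/x² by the spread of weighted normalized variances; that second step is omitted from this sketch.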
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
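The information-criterion alternative can be sketched as follows: compare a no-deformation null model against a per-epoch (shift) model using the least-squares form of AIC. The benchmark heights below are invented for illustration and are not from the paper's test scenarios.

```python
import numpy as np

def aic_ls(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors (up to a constant)."""
    return n * np.log(rss / n) + 2 * k

# Heights of the same benchmark observed in two epochs (hypothetical, metres)
epoch1 = np.array([10.002, 10.001, 10.003, 10.002])
epoch2 = np.array([10.011, 10.012, 10.010, 10.013])
y = np.concatenate([epoch1, epoch2])
n = y.size

# Null model: one common height (no deformation), k = 1 parameter
rss0 = np.sum((y - y.mean()) ** 2)
# Alternative: a separate height per epoch (a shift occurred), k = 2 parameters
rss1 = np.sum((epoch1 - epoch1.mean()) ** 2) + np.sum((epoch2 - epoch2.mean()) ** 2)

aic0 = aic_ls(rss0, n, 1)
aic1 = aic_ls(rss1, n, 2)
best = "deformation" if aic1 < aic0 else "no deformation"
```

Unlike a hypothesis test, no decision error rate has to be chosen; the model with the smaller AIC is simply preferred.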
Fantasy-Testing-Assessment: A Proposed Model for the Investigation of Mate Selection.
ERIC Educational Resources Information Center
Nofz, Michael P.
1984-01-01
Proposes a model for mate selection which outlines three modes of interpersonal relating--fantasy, testing, and assessment (FTA). The model is viewed as a more accurate representation of mate selection processes than suggested by earlier theories, and can be used to clarify couples' understandings of their own relationships. (JAC)
The Performance of IRT Model Selection Methods with Mixed-Format Tests
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2012-01-01
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
A model for field toxicity tests
Kaiser, Mark S.; Finger, Susan E.
1996-01-01
Toxicity tests conducted under field conditions present an interesting challenge for statistical modelling. In contrast to laboratory tests, the concentrations of potential toxicants are not held constant over the test. In addition, the number and identity of toxicants that belong in a model as explanatory factors are not known and must be determined through a model selection process. We present one model to deal with these needs. This model takes the record of mortalities to form a multinomial distribution in which parameters are modelled as products of conditional daily survival probabilities. These conditional probabilities are in turn modelled as logistic functions of the explanatory factors. The model incorporates lagged values of the explanatory factors to deal with changes in the pattern of mortalities over time. The issue of model selection and assessment is approached through the use of generalized information criteria and power divergence goodness-of-fit tests. These model selection criteria are applied in a cross-validation scheme designed to assess the ability of a model to both fit data used in estimation and predict data deleted from the estimation data set. The example presented demonstrates the need for inclusion of lagged values of the explanatory factors and suggests that penalized likelihood criteria may not provide adequate protection against overparameterized models in model selection.
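The multinomial construction described above can be sketched with a logistic model for the conditional daily survival probability. The coefficients and the concentration record below are hypothetical illustrations, not estimates from the paper.

```python
import math

def daily_survival(conc, beta0=4.0, beta1=-0.8):
    """Logistic model for the conditional probability of surviving one day
    at a given toxicant concentration (coefficients are illustrative)."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * conc)))

def mortality_cells(concentrations):
    """Multinomial cell probabilities: P(die on day t) for each day, plus
    P(survive the whole test), built as products of conditional daily
    survival probabilities."""
    cells, alive = [], 1.0
    for c in concentrations:
        s = daily_survival(c)
        cells.append(alive * (1.0 - s))  # die on this day
        alive *= s                        # still alive afterwards
    cells.append(alive)                   # survive the entire test
    return cells

# Concentration record over a four-day field test (levels vary by day)
probs = mortality_cells([1.0, 2.5, 4.0, 3.0])
```

Because field concentrations change daily, each day gets its own conditional survival probability; the paper additionally allows lagged concentrations as covariates, which this sketch omits.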
A Rational Analysis of the Selection Task as Optimal Data Selection.
ERIC Educational Resources Information Center
Oaksford, Mike; Chater, Nick
1994-01-01
Experimental data on human reasoning in hypothesis-testing tasks is reassessed in light of a Bayesian model of optimal data selection in inductive hypothesis testing. The rational analysis provided by the model suggests that reasoning in such tasks may be rational rather than subject to systematic bias. (SLD)
Bergman, Michael; Zhuang, Ziqing; Brochu, Elizabeth; Palmiero, Andrew
2016-01-01
National Institute for Occupational Safety and Health (NIOSH)-approved N95 filtering-facepiece respirators (FFR) are currently stockpiled by the U.S. Centers for Disease Control and Prevention (CDC) for emergency deployment to healthcare facilities in the event of a widespread emergency such as an influenza pandemic. This study assessed the fit of N95 FFRs purchased for the CDC Strategic National Stockpile. The study addresses the question of whether the fit achieved by specific respirator sizes relates to facial size categories as defined by two NIOSH fit test panels. Fit test data were analyzed from 229 test subjects who performed a nine-donning fit test on seven N95 FFR models using a quantitative fit test protocol. An initial respirator model selection process was used to determine if the subject could achieve an adequate fit on a particular model; subjects then tested the adequately fitting model for the nine-donning fit test. Only data for models which provided an adequate initial fit (through the model selection process) for a subject were analyzed for this study. For the nine-donning fit test, six of the seven respirator models accommodated the fit of subjects (as indicated by geometric mean fit factor > 100) for not only the intended NIOSH bivariate and PCA panel sizes corresponding to the respirator size, but also for other panel sizes which were tested for each model. The model which showed poor performance may not be accurately represented because only two subjects passed the initial selection criteria to use this model. Findings are supportive of the current selection of facial dimensions for the new NIOSH panels. The various FFR models selected for the CDC Strategic National Stockpile provide a range of sizing options to fit a variety of facial sizes. PMID:26877587
Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?
ERIC Educational Resources Information Center
Powell, J. C.
A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models were tested for fit using the chi square procedure. Data were collected from 359 elementary school students ages 9-12.…
Williamson, Scott; Fledel-Alon, Adi; Bustamante, Carlos D
2004-09-01
We develop a Poisson random-field model of polymorphism and divergence that allows arbitrary dominance relations in a diploid context. This model provides a maximum-likelihood framework for estimating both selection and dominance parameters of new mutations using information on the frequency spectrum of sequence polymorphisms. This is the first DNA sequence-based estimator of the dominance parameter. Our model also leads to a likelihood-ratio test for distinguishing nongenic from genic selection; simulations indicate that this test is quite powerful when a large number of segregating sites are available. We also use simulations to explore the bias in selection parameter estimates caused by unacknowledged dominance relations. When inference is based on the frequency spectrum of polymorphisms, genic selection estimates of the selection parameter can be very strongly biased even for minor deviations from the genic selection model. Surprisingly, however, when inference is based on polymorphism and divergence (McDonald-Kreitman) data, genic selection estimates of the selection parameter are nearly unbiased, even for completely dominant or recessive mutations. Further, we find that weak overdominant selection can increase, rather than decrease, the substitution rate relative to levels of polymorphism. This nonintuitive result has major implications for the interpretation of several popular tests of neutrality.
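The likelihood-ratio machinery referenced above is generic; a minimal sketch follows, with invented log-likelihood values standing in for the maximized fits of the nested genic (null) and arbitrary-dominance (alternative) models.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_alt, df):
    """Likelihood-ratio test for nested models: twice the gain in maximized
    log-likelihood is asymptotically chi-square with df equal to the number
    of extra free parameters in the alternative model."""
    lr = 2.0 * (loglik_alt - loglik_null)
    return lr, chi2.sf(lr, df)

# Hypothetical maximized log-likelihoods under each model (df = 1 because the
# alternative frees one dominance parameter)
lr, p = likelihood_ratio_test(-1234.6, -1229.1, df=1)
```

A small p-value would reject genic selection in favour of the model with a free dominance parameter.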
Using Response Times for Item Selection in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.
2008-01-01
Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…
Evaluating the habitat capability model for Merriam's turkeys
Mark A. Rumble; Stanley H. Anderson
1995-01-01
Habitat capability (HABCAP) models for wildlife assist land managers in predicting the consequences of their management decisions. Models must be tested and refined prior to using them in management planning. We tested the predicted patterns of habitat selection of the R2 HABCAP model using observed patterns of habitats selected by radio-marked Merriam's turkey (
A Comparison of the One-and Three-Parameter Logistic Models on Measures of Test Efficiency.
ERIC Educational Resources Information Center
Benson, Jeri
Two methods of item selection were used to select sets of 40 items from a 50-item verbal analogies test, and the resulting item sets were compared for relative efficiency. The BICAL program was used to select the 40 items having the best mean square fit to the one parameter logistic (Rasch) model. The LOGIST program was used to select the 40 items…
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps for this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties, where the properties of sediment sources remain constant, or at the very least, any variation in these properties should occur in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to select conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination tests (e.g. Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling.
Moving forward, it would be beneficial if researchers test their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
The 727 airplane target thrust reverser static performance model test for refanned JT8D engines
NASA Technical Reports Server (NTRS)
Chow, C. T. P.; Atkey, E. N.
1974-01-01
The results of a scale model static performance test of target thrust reverser configurations for the Pratt and Whitney Aircraft JT8D-100 series engine are presented. The objective of the test was to select a series of suitable candidate reverser configurations for the subsequent airplane model wind tunnel ingestion and flight controls tests. Test results indicate that adequate reverse thrust performance with compatible engine airflow match is achievable for the selected configurations. Tapering of the lips results in loss of performance and only minimal flow directivity. Door pressure surveys were conducted on a selected number of lip and fence configurations to obtain data to support the design of the thrust reverser system.
10 CFR 431.135 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Automatic Commercial Ice Makers Test Procedures § 431.135 Units to be tested. For each basic model of automatic commercial ice maker selected for testing, a sample of sufficient size shall be selected...
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
Design Of Computer Based Test Using The Unified Modeling Language
NASA Astrophysics Data System (ADS)
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
Admission selection at Politeknik Negeri Bengkalis through the interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN), and the independent route (UM-Polbeng) was conducted using paper-based tests (PBT). The paper-based test model has several weaknesses: it wastes paper, test questions can leak to the public, and test results can be manipulated. This research aimed to create a computer-based test (CBT) model using the Unified Modeling Language (UML), consisting of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to password-protecting the test questions before display through an encryption and decryption process, using the RSA cryptography algorithm. The questions drawn from the question bank were then randomized using the Fisher-Yates shuffle method. The network architecture used for the computer-based test application was a client-server model on a Local Area Network (LAN). The result of the design was a computer-based test application for admission selection at Politeknik Negeri Bengkalis.
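The Fisher-Yates shuffle mentioned above is a standard algorithm for unbiased randomization; a sketch follows (the seed and question IDs are illustrative, not from the paper).

```python
import random

def fisher_yates(items, rng=None):
    """Fisher-Yates shuffle: each of the n! orderings is equally likely."""
    rng = rng or random.Random()
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = rng.randint(0, i)      # pick from the not-yet-fixed prefix a[0..i]
        a[i], a[j] = a[j], a[i]    # move the pick into its final position
    return a

# Randomize the order of questions drawn from a question bank
question_ids = list(range(1, 11))
paper = fisher_yates(question_ids, random.Random(42))
```

Each candidate thus receives the same questions in an independently randomized order, which limits copying between adjacent seats.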
Acoustic Model Testing Chronology
NASA Technical Reports Server (NTRS)
Nesman, Tom
2017-01-01
Scale models have been used for decades to replicate liftoff environments, and in particular acoustics, for launch vehicles. It is assumed, and analysis supports, that the key characteristics of noise generation, propagation, and measurement can be scaled. Over time, significant insight was gained not just into the effects of thruster details, pad geometry, and sound mitigation but also into the physical processes involved. An overview of a selected set of scale model tests is compiled here to illustrate the variety of configurations that have been tested and the fundamental knowledge gained. The selected scale model tests are presented chronologically.
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Code of Federal Regulations, 2010 CFR
2010-07-01
... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...
Code of Federal Regulations, 2011 CFR
2011-07-01
... certification; test fleet selections; determinations of parameters subject to adjustment for certification and..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas...; test fleet selections; determinations of parameters subject to adjustment for certification and...
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
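The AIC-based model averaging the authors advocate rests on Akaike weights, which convert AIC scores into relative model support. The AIC values below are invented for illustration; only the formula is standard.

```python
import math

def akaike_weights(aic_scores):
    """Convert AIC scores into Akaike weights: the relative support for each
    candidate model, usable for model averaging (multimodel inference)."""
    a_min = min(aic_scores)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aic_scores]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for three common substitution models
aics = {"JC69": 5120.4, "HKY85": 5034.9, "GTR+G": 5031.2}
weights = dict(zip(aics, akaike_weights(list(aics.values()))))
```

Parameter estimates (or phylogenies) from each candidate model can then be averaged with these weights instead of committing to a single "best" model.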
ERIC Educational Resources Information Center
Van Zalk, Maarten Herman Walter; Kerr, Margaret; Branje, Susan J. T.; Stattin, Hakan; Meeus, Wim H. J.
2010-01-01
The authors of this study tested a selection-influence-de-selection model of depression. This model explains friendship influence processes (i.e., friends' depressive symptoms increase adolescents' depressive symptoms) while controlling for two processes: friendship selection (i.e., selection of friends with similar levels of depressive symptoms)…
Selecting a Response in Task Switching: Testing a Model of Compound Cue Retrieval
ERIC Educational Resources Information Center
Schneider, Darryl W.; Logan, Gordon D.
2009-01-01
How can a task-appropriate response be selected for an ambiguous target stimulus in task-switching situations? One answer is to use compound cue retrieval, whereby stimuli serve as joint retrieval cues to select a response from long-term memory. In the present study, the authors tested how well a model of compound cue retrieval could account for a…
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Model-Based Diagnosis in a Power Distribution Test-Bed
NASA Technical Reports Server (NTRS)
Scarl, E.; McCall, K.
1998-01-01
The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.
IRT Model Selection Methods for Dichotomous Items
ERIC Educational Resources Information Center
Kang, Taehoon; Cohen, Allan S.
2007-01-01
Fit of the model to the data is important if the benefits of item response theory (IRT) are to be obtained. In this study, the authors compared model selection results using the likelihood ratio test, two information-based criteria, and two Bayesian methods. An example illustrated the potential for inconsistency in model selection depending on…
Adjusting HIV prevalence estimates for non-participation: an application to demographic surveillance
McGovern, Mark E.; Marra, Giampiero; Radice, Rosalba; Canning, David; Newell, Marie-Louise; Bärnighausen, Till
2015-01-01
Introduction HIV testing is a cornerstone of efforts to combat the HIV epidemic, and testing conducted as part of surveillance provides invaluable data on the spread of infection and the effectiveness of campaigns to reduce the transmission of HIV. However, participation in HIV testing can be low, and if respondents systematically select not to be tested because they know or suspect they are HIV positive (and fear disclosure), standard approaches to deal with missing data will fail to remove selection bias. We implemented Heckman-type selection models, which can be used to adjust for missing data that are not missing at random, and established the extent of selection bias in a population-based HIV survey in an HIV hyperendemic community in rural South Africa. Methods We used data from a population-based HIV survey carried out in 2009 in rural KwaZulu-Natal, South Africa. In this survey, 5565 women (35%) and 2567 men (27%) provided blood for an HIV test. We accounted for missing data using interviewer identity as a selection variable which predicted consent to HIV testing but was unlikely to be independently associated with HIV status. Our approach involved using this selection variable to examine the HIV status of residents who would ordinarily refuse to test, except that they were allocated a persuasive interviewer. Our copula model allows for flexibility when modelling the dependence structure between HIV survey participation and HIV status. Results For women, our selection model generated an HIV prevalence estimate of 33% (95% CI 27–40) for all people eligible to consent to HIV testing in the survey. This estimate is higher than the estimate of 24% generated when only information from respondents who participated in testing is used in the analysis, and the estimate of 27% when imputation analysis is used to predict missing data on HIV status. 
For men, we found an HIV prevalence of 25% (95% CI 15–35) using the selection model, compared to 16% among those who participated in testing, and 18% estimated with imputation. We provide new confidence intervals that correct for the fact that the relationship between testing and HIV status is unknown and requires estimation. Conclusions We confirm the feasibility and value of adopting selection models to account for missing data in population-based HIV surveys and surveillance systems. Elements of survey design, such as interviewer identity, present the opportunity to adopt this approach in routine applications. Where non-participation is high, true confidence intervals are much wider than those generated by standard approaches to dealing with missing data suggest. PMID:26613900
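The selection-model logic described above can be illustrated with a classic Heckman two-step sketch on synthetic data. This is not the authors' copula model (which handles a binary HIV outcome); it is a minimal, assumed setup with a continuous outcome, a hypothetical instrument `z` standing in for interviewer identity, and made-up coefficients, shown only to make the mechanics of the correction concrete.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # selection instrument (stand-in for interviewer identity)
x = rng.normal(size=n)                      # covariate of interest
u, v = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
selected = (0.5 * z - 0.8 * x + v) > 0      # consent decision: not missing at random
y = 1.0 + 2.0 * x + u                       # outcome, observed only when selected

# Step 1: probit model of selection on (z, x)
def probit_nll(b):
    p = norm.cdf(b[0] + b[1] * z + b[2] * x).clip(1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()

b = minimize(probit_nll, np.zeros(3)).x
idx = b[0] + b[1] * z + b[2] * x
imr = norm.pdf(idx) / norm.cdf(idx)         # inverse Mills ratio

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio
s = selected
X = np.column_stack([np.ones(s.sum()), x[s], imr[s]])
beta = np.linalg.lstsq(X, y[s], rcond=None)[0]
# beta[1] recovers the slope on x; beta[2] estimates the error covariance (rho * sigma_u)
```

The coefficient on the inverse Mills ratio (`beta[2]`) estimates the covariance between the selection and outcome errors; a value near zero would indicate that non-participation is ignorable.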
Cohen, Jérémie F.; Cohen, Robert; Bidet, Philippe; Elbez, Annie; Levy, Corinne; Bossuyt, Patrick M.; Chalumeau, Martin
2017-01-01
Background There is controversy whether physicians can rely on signs and symptoms to select children with pharyngitis who should undergo a rapid antigen detection test (RADT) for group A streptococcus (GAS). Our objective was to evaluate the efficiency of signs and symptoms in selectively testing children with pharyngitis. Materials and methods In this multicenter, prospective, cross-sectional study, French primary care physicians collected clinical data and double throat swabs from 676 consecutive children with pharyngitis; the first swab was used for the RADT and the second was used for a throat culture (reference standard). We developed a logistic regression model combining signs and symptoms with GAS as the outcome. We then derived a model-based selective testing strategy, assuming that children with low and high calculated probability of GAS (<0.12 and >0.85) would be managed without the RADT. Main outcomes and measures were performance of the model (c-index and calibration) and efficiency of the model-based strategy (proportion of participants in whom RADT could be avoided). Results Throat culture was positive for GAS in 280 participants (41.4%). Out of 17 candidate signs and symptoms, eight were retained in the prediction model. The model had an optimism-corrected c-index of 0.73; calibration of the model was good. With the model-based strategy, RADT could be avoided in 6.6% of participants (95% confidence interval 4.7% to 8.5%), as compared to a RADT-for-all strategy. Conclusions This study demonstrated that relying on signs and symptoms for selectively testing children with pharyngitis is not efficient. We recommend using a RADT in all children with pharyngitis. PMID:28235012
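The model-based strategy above can be sketched as follows: fit a logistic model, then skip the RADT only when the predicted probability falls below the rule-out threshold or above the rule-in threshold. The data, coefficients, and eight-sign design below are invented for illustration; only the 0.12/0.85 thresholds come from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 676
signs = rng.integers(0, 2, size=(n, 8)).astype(float)       # 8 binary signs/symptoms
coef = np.array([1.5, 1.2, 1.0, 0.8, 0.6, 0.5, 0.4, 0.3])   # invented effect sizes
gas = rng.random(n) < 1 / (1 + np.exp(-(signs @ coef - 3.0)))  # culture-confirmed GAS

model = LogisticRegression().fit(signs, gas)
p = model.predict_proba(signs)[:, 1]

# Manage without RADT when the model rules GAS out (<0.12) or in (>0.85)
avoided = (p < 0.12) | (p > 0.85)
print(f"RADT avoided in {100 * avoided.mean():.1f}% of children")
```

A small avoided fraction, as in the study, means the signs and symptoms rarely push the predicted probability into either decisive zone.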
Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing
NASA Astrophysics Data System (ADS)
Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel
Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show that, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
Tučník, Petr; Bureš, Vladimír
2016-01-01
Multi-criteria decision-making (MCDM) can be formally implemented by various methods. This study compares the suitability of four selected MCDM methods, namely WPM, TOPSIS, VIKOR, and PROMETHEE, for future applications in agent-based computational economic (ACE) models of larger scale (i.e., over 10 000 agents in one geographical region). These four MCDM methods were selected according to their appropriateness for computational processing in ACE applications. Tests of the selected methods were conducted on four hardware configurations. For each method, 100 tests were performed, which represented one testing iteration. With four testing iterations conducted on each hardware setting and separate testing of all configurations with the -server parameter de/activated, altogether 12,800 data points were collected and consequently analyzed. An illustrative decision-making scenario was used which allows the mutual comparison of all of the selected decision-making methods. Our test results suggest that although all methods are convenient and can be used in practice, the VIKOR method accomplished the tests with the best results and thus can be recommended as the most suitable for simulations of large-scale agent-based models. PMID:27806061
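Of the four MCDM methods compared, VIKOR is straightforward to sketch. The implementation below is a minimal, assumed version for benefit-type criteria on a toy decision matrix, not the study's code; it computes the group utility S, the individual regret R, and the compromise index Q (lower is better).

```python
import numpy as np

def vikor(F, w, v=0.5):
    """Minimal VIKOR ranking. F: alternatives x criteria (benefit criteria), w: weights."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    D = w * (f_best - F) / (f_best - f_worst)   # normalized weighted regret terms
    S, R = D.sum(axis=1), D.max(axis=1)         # group utility and individual regret
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q                                    # lower Q = better compromise solution

F = np.array([[250., 16., 12.],    # toy alternatives scored on 3 benefit criteria
              [200., 16., 8.],
              [300., 32., 16.],
              [275., 32., 8.]])
w = np.array([0.4, 0.3, 0.3])
Q = vikor(F, w)
print("best alternative:", int(np.argmin(Q)))
```

Here the third alternative dominates on every criterion, so it gets Q = 0 and is ranked best; v weights the balance between group utility and individual regret.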
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
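The redundancy analysis can be illustrated with a small sketch: compare the Shannon entropy of the observed joint selection patterns against the independence bound given by the sum of the marginal card entropies. The eight example response rows are invented; a positive redundancy indicates dependent choices, as the model theory predicts.

```python
import numpy as np
from collections import Counter

def entropy(probs):
    p = np.array([q for q in probs if q > 0])
    return float(-(p * np.log2(p)).sum())

# Each row: which of the four Wason cards (p, not-p, q, not-q) one participant selected.
# These eight response vectors are invented for illustration.
selections = np.array([
    [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 1, 0], [1, 0, 0, 0],
    [1, 0, 1, 0], [1, 0, 0, 1], [1, 0, 1, 0], [1, 0, 0, 1],
])
n = len(selections)

# Entropy of the observed joint selection patterns
patterns = Counter(map(tuple, selections.tolist()))
H_joint = entropy([c / n for c in patterns.values()])

# Independence bound: sum of the four marginal card entropies
H_indep = sum(entropy([m, 1 - m]) for m in selections.mean(axis=0))

redundancy = 1 - H_joint / H_indep   # > 0 means the card choices are interdependent
print(round(redundancy, 3))
```

Independent selections would give H_joint close to H_indep (redundancy near zero); clustering on a few patterns, as here, pushes redundancy above zero.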
Safran, Rebecca J; Scordato, Elizabeth S C; Symes, Laurel B; Rodríguez, Rafael L; Mendelson, Tamra C
2013-11-01
Speciation by divergent natural selection is well supported. However, the role of sexual selection in speciation is less well understood due to disagreement about whether sexual selection is a mechanism of evolution separate from natural selection, as well as confusion about various models and tests of sexual selection. Here, we outline how sexual selection and natural selection are different mechanisms of evolutionary change, and suggest that this distinction is critical when analyzing the role of sexual selection in speciation. Furthermore, we clarify models of sexual selection with respect to their interaction with ecology and natural selection. In doing so, we outline a research agenda for testing hypotheses about the relative significance of divergent sexual and natural selection in the evolution of reproductive isolation. Copyright © 2013 Elsevier Ltd. All rights reserved.
Learning epistatic interactions from sequence-activity data to predict enantioselectivity
NASA Astrophysics Data System (ADS)
Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael
2017-12-01
Enzymes with a high selectivity are desirable for improving economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity mutations are often introduced near the catalytic active site. In this compact environment epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
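The linear-versus-epistatic comparison can be reproduced in miniature with scikit-learn's SVR: on synthetic mutation data containing a strong pairwise interaction, a degree-2 polynomial kernel should recover the non-additive term that a linear kernel cannot. The feature effects and interaction strength below are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n, d = 400, 6
X = rng.integers(0, 2, size=(n, d)).astype(float)      # binary "mutation present" features
# Additive effects plus a strong pairwise (epistatic) interaction between sites 0 and 1
y = X @ np.array([1.0, 0.8, 0.5, 0.3, 0.2, 0.1]) + 2.5 * X[:, 0] * X[:, 1]
y += rng.normal(scale=0.1, size=n)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
r = {}
for kernel, kw in [("linear", {}), ("poly", {"degree": 2, "coef0": 1.0})]:
    pred = SVR(kernel=kernel, C=10.0, **kw).fit(Xtr, ytr).predict(Xte)
    r[kernel] = np.corrcoef(yte, pred)[0, 1]           # Pearson r on held-out data
print({k: round(v, 3) for k, v in r.items()})
```

The degree-2 polynomial kernel spans all pairwise feature products, which is exactly the representation needed for pairwise epistasis; on purely additive data the two kernels would perform comparably.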
Bruni, Renato; Cesarone, Francesco; Scozzari, Andrea; Tardella, Fabio
2016-09-01
A large number of portfolio selection models have appeared in the literature since the pioneering work of Markowitz. However, even when computational and empirical results are described, they are often hard to replicate and compare due to the unavailability of the datasets used in the experiments. We provide here several datasets for portfolio selection generated using real-world price values from several major stock markets. The datasets contain weekly return values, adjusted for dividends and for stock splits, which are cleaned from errors as much as possible. The datasets are available in different formats, and can be used as benchmarks for testing the performances of portfolio selection models and for comparing the efficiency of the algorithms used to solve them. We also provide, for these datasets, the portfolios obtained by several selection strategies based on Stochastic Dominance models (see "On Exact and Approximate Stochastic Dominance Strategies for Portfolio Selection" (Bruni et al. [2])). We believe that testing portfolio models on publicly available datasets greatly simplifies the comparison of the different portfolio selection strategies.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Scrams, David J.; Schnipke, Deborah L.
This paper proposes an item selection algorithm that can be used to neutralize the effect of time limits in computer adaptive testing. The method is based on a statistical model for the response-time distributions of the test takers on the items in the pool that is updated each time a new item has been administered. Predictions from the model are…
Morrissey, Karyn; Kinderman, Peter; Pontin, Eleanor; Tai, Sara; Schwannauer, Mathias
2016-08-01
In June 2011 the BBC Lab UK carried out a web-based survey on the causes of mental distress. The 'Stress Test' was launched on 'All in the Mind', a BBC Radio 4 programme, and the test's URL was publicised on radio and TV broadcasts and made available via BBC web pages and social media. Given the large amount of data created (over 32,800 participants, with corresponding diagnosis, demographic, and socioeconomic characteristics), the dataset is potentially an important source of data for population-based research on depression and anxiety. However, as respondents self-selected to participate in the online survey, the survey may comprise a non-random sample: it may be that only individuals who listen to BBC Radio 4 and/or use its website participated. In this instance, using the Stress Test data for wider population-based research may create sample selection bias. Focusing on the depression component of the Stress Test, this paper presents an easy-to-use method, the Two Step Probit Selection Model, to detect and statistically correct selection bias in the Stress Test. Using a Two Step Probit Selection Model, this paper did not find statistically significant selection on unobserved factors for participants of the Stress Test. That is, survey participants who accessed and completed the online survey are not systematically different from non-participants on the variables of substantive interest. Copyright © 2016 Elsevier Ltd. All rights reserved.
Automating an integrated spatial data-mining model for landfill site selection
NASA Astrophysics Data System (ADS)
Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Aziz, Hamidi Abdul
2017-10-01
An integrated programming environment represents a robust approach to building a valid model for landfill site selection. One of the main challenges in the integrated model is the complicated processing and modelling due to the programming stages and several limitations. An automation process helps avoid the limitations and improve the interoperability between integrated programming environments. This work targets the automation of a spatial data-mining model for landfill site selection by integrating a spatial programming environment (Python-ArcGIS) and a non-spatial environment (MATLAB). The model was constructed using neural networks and is divided into nine stages distributed between MATLAB and Python-ArcGIS. A case study was taken from the northern part of Peninsular Malaysia. 22 criteria were selected for use as input data and to build the training and testing datasets. The outcomes show a high performance accuracy of 98.2% on the testing dataset using 10-fold cross validation. The automated spatial data-mining model provides a solid platform for decision makers to perform landfill site selection and planning operations on a regional scale.
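The evaluation step described above, 10-fold cross-validation of a neural network on 22 input criteria, can be sketched with scikit-learn. The synthetic data and network size below are assumptions; the point is the validation protocol, not the reported 98.2% figure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 22 input criteria and a binary "suitable / not suitable" label, as in the abstract;
# the dataset itself is synthetic.
X, y = make_classification(n_samples=600, n_features=22, n_informative=10,
                           random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Each fold holds out 10% of the sites for testing, so the mean score estimates out-of-sample accuracy rather than training fit.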
Rácz, A; Bajusz, D; Héberger, K
2015-01-01
Recent implementations of QSAR modelling software provide the user with numerous models and a wealth of information. In this work, we provide some guidance on how one should interpret the results of QSAR modelling, compare and assess the resulting models, and select the best and most consistent ones. Two QSAR datasets are applied as case studies for the comparison of model performance parameters and model selection methods. We demonstrate the capabilities of sum of ranking differences (SRD) in model selection and ranking, and identify the best performance indicators and models. While the exchange of the original training and (external) test sets does not affect the ranking of performance parameters, it provides improved models in certain cases (despite the lower number of molecules in the training set). Performance parameters for external validation are substantially separated from the other merits in SRD analyses, highlighting their value in data fusion.
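Sum of ranking differences (SRD) itself is simple to sketch: rank the objects by each model's merit values, rank them by a reference (a known ideal, or in the usual SRD setup the row average), and sum the absolute rank differences per model; lower SRD means closer to the reference. The toy merit matrix below is invented.

```python
import numpy as np
from scipy.stats import rankdata

def srd(M, reference):
    """Sum of ranking differences per column (model) of merit matrix M.
    Rows are objects (e.g. molecules); reference holds the ideal merit values."""
    ref_ranks = rankdata(reference)
    return np.array([np.abs(rankdata(M[:, j]) - ref_ranks).sum()
                     for j in range(M.shape[1])])

# Toy merits: 3 models scored on 5 objects; the reference is a known ideal ordering
M = np.array([[0.90, 0.88, 0.10],
              [0.80, 0.79, 0.90],
              [0.70, 0.72, 0.20],
              [0.60, 0.58, 0.80],
              [0.50, 0.52, 0.30]])
reference = np.array([0.95, 0.85, 0.75, 0.65, 0.55])
scores = srd(M, reference)
print(scores)   # models 0 and 1 track the reference exactly; model 2 does not
```

In the full method, the observed SRD values are validated against the SRD distribution of random rankings to check that agreement with the reference is not due to chance.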
ERIC Educational Resources Information Center
Dolan, Conor V.; Molenaar, Peter C. M.
1994-01-01
In multigroup covariance structure analysis with structured means, the traditional latent selection model is formulated as a special case of phenotypic selection. Illustrations with real and simulated data demonstrate how one can test specific hypotheses concerning selection on latent variables. (SLD)
Best Design for Multidimensional Computerized Adaptive Testing With the Bifactor Model
Seo, Dong Gi; Weiss, David J.
2015-01-01
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm (MCAT) with a bifactor model using simulated data. Four item selection methods in MCAT were examined for three bifactor pattern designs using two multidimensional item response theory models. To compare MCAT item selection and estimation methods, a fixed test length was used. The Ds-optimality item selection improved θ estimates with respect to a general factor, and either D- or A-optimality improved estimates of the group factors in three bifactor pattern designs under two multidimensional item response theory models. The MCAT model without a guessing parameter functioned better than the MCAT model with a guessing parameter. The MAP (maximum a posteriori) estimation method provided more accurate θ estimates than the EAP (expected a posteriori) method under most conditions, and MAP showed lower observed standard errors than EAP under most conditions, except for a general factor condition using Ds-optimality item selection. PMID:29795848
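D-optimality item selection in a multidimensional CAT can be sketched as follows: each candidate item contributes a Fisher information matrix at the current θ estimate, and the next item is the one that maximizes the determinant of the accumulated information. The M2PL information form and all item parameters below are generic assumptions, not the study's bifactor setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 2
A = rng.uniform(0.5, 2.0, size=(n_items, dim))   # discrimination vectors
d = rng.normal(size=n_items)                     # intercepts
theta = np.array([0.3, -0.5])                    # current ability estimate

def item_info(a, b, theta):
    p = 1.0 / (1.0 + np.exp(-(a @ theta + b)))
    return p * (1 - p) * np.outer(a, a)          # M2PL Fisher information matrix

administered = [4, 17]                           # items already given
I_acc = sum(item_info(A[i], d[i], theta) for i in administered)
I_acc += 1e-3 * np.eye(dim)                      # small prior ridge keeps det > 0

# D-optimality: pick the unused item maximizing det(I_acc + I_item)
candidates = [i for i in range(n_items) if i not in administered]
best = max(candidates,
           key=lambda i: np.linalg.det(I_acc + item_info(A[i], d[i], theta)))
print("next item:", best)
```

Ds- and A-optimality, compared in the study, differ only in the scalar objective applied to the same information matrices (a subset-targeted determinant and the trace of the inverse, respectively).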
Conducting field studies for testing pesticide leaching models
Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.
1990-01-01
A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
Variability aware compact model characterization for statistical circuit design optimization
NASA Astrophysics Data System (ADS)
Qiao, Ying; Qian, Kun; Spanos, Costas J.
2012-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose an efficient variability-aware compact model characterization methodology based on the linear propagation of variance. Hierarchical spatial variability patterns of selected compact model parameters are directly calculated from transistor array test structures. This methodology has been implemented and tested using transistor I-V measurements and the EKV-EPFL compact model. Calculation results compare well to full-wafer direct model parameter extractions. Further studies are done on the proper selection of both compact model parameters and electrical measurement metrics used in the method.
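Linear propagation of variance is a first-order delta-method approximation: for an output f(p) with parameter covariance Σ, Var(f) ≈ gᵀΣg, where g is the gradient of f at the nominal point. The toy "compact model" output and covariance below are invented; the Monte Carlo check simply confirms the linearization is adequate for small parameter variations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transistor-like output as a function of two parameters (invented form)
def f(p):
    vth, k = p
    return k * (1.0 - vth) ** 2          # saturation-style current, supply = 1 assumed

p0 = np.array([0.4, 2.0])                # nominal parameter values
Sigma = np.array([[0.0004, 0.0001],      # parameter covariance (process variation)
                  [0.0001, 0.0025]])

# Numerical gradient of f at the nominal point
eps = 1e-6
g = np.array([(f(p0 + eps * e) - f(p0 - eps * e)) / (2 * eps) for e in np.eye(2)])

var_lin = g @ Sigma @ g                  # linear propagation of variance
var_mc = f(rng.multivariate_normal(p0, Sigma, size=200_000).T).var()
print(var_lin, var_mc)
```

The appeal of the linear form is that it needs only sensitivities and the parameter covariance, so hierarchical spatial variance components can be propagated without re-simulating the full model.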
Howard B. Stauffer; Cynthia J. Zabel; Jeffrey R. Dunk
2005-01-01
We compared a set of competing logistic regression habitat selection models for Northern Spotted Owls (Strix occidentalis caurina) in California. The habitat selection models were estimated, compared, evaluated, and tested using multiple sample datasets collected on federal forestlands in northern California. We used Bayesian methods in interpreting...
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
A Feedback Control Strategy for Enhancing Item Selection Efficiency in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Weissman, Alexander
2006-01-01
A computerized adaptive test (CAT) may be modeled as a closed-loop system, where item selection is influenced by trait level ([theta]) estimation and vice versa. When discrepancies exist between an examinee's estimated and true [theta] levels, nonoptimal item selection is a likely result. Nevertheless, examinee response behavior consistent with…
Concept Selection and Developmental Effects in Bilingual Speech Production
ERIC Educational Resources Information Center
Schwieter, John; Sunderman, Gretchen
2009-01-01
The present study investigates the locus of language selection in less and more proficient language learners, specifically testing differential predictions of La Heij's (2005) concept selection model (CSM) and Kroll and Stewart's (1994) revised hierarchical model (RHM). Less and more proficient English dominant learners of Spanish participated in…
Simulation of flow and water quality of the Arroyo Colorado, Texas, 1989-99
Raines, Timothy H.; Miranda, Roger M.
2002-01-01
A model parameter set for use with the Hydrological Simulation Program—FORTRAN watershed model was developed to simulate flow and water quality for selected properties and constituents for the Arroyo Colorado from the city of Mission to the Laguna Madre, Texas. The model simulates flow, selected water-quality properties, and constituent concentrations. The model can be used to estimate a total maximum daily load for selected properties and constituents in the Arroyo Colorado. The model was calibrated and tested for flow with data measured during 1989–99 at three streamflow-gaging stations. The errors for total flow volume ranged from -0.1 to 29.0 percent, and the errors for total storm volume ranged from -15.6 to 8.4 percent. The model was calibrated and tested for water quality for seven properties and constituents with 1989–99 data. The model was calibrated sequentially for suspended sediment, water temperature, biochemical oxygen demand, dissolved oxygen, nitrate nitrogen, ammonia nitrogen, and orthophosphate. The simulated concentrations of the selected properties and constituents generally matched the measured concentrations available for the calibration and testing periods. The model was used to simulate total point- and nonpoint-source loads for selected properties and constituents for 1989–99 for urban, natural, and agricultural land-use types. About one-third to one-half of the biochemical oxygen demand and nutrient loads are from urban point and nonpoint sources, although only 13 percent of the total land use in the basin is urban.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John Bo
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However, it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
An Evaluation of a Testing Model for Listening Comprehension.
ERIC Educational Resources Information Center
Kangli, Ji
A model for testing listening comprehension in English as a Second Language is discussed and compared with the Test for English Majors (TEM). The model in question incorporates listening for: (1) understanding factual information; (2) comprehension and interpretation; (3) detailed and selective information; (4) global ideas; (5) on-line tasks…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blume-Kohout, Robin J; Scholten, Travis L.
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Vio, Gareth A.; Andrianne, Thomas; Razak, Norizham Abdul; Dimitriadis, Grigorios
2012-01-01
The stall flutter response of a rectangular wing in a low speed wind tunnel is modelled using a nonlinear difference equation description. Static and dynamic tests are used to select a suitable model structure and basis function. Bifurcation criteria such as the Hopf condition and vibration amplitude variation with airspeed were used to ensure the model was representative of experimentally measured stall flutter phenomena. Dynamic test data were used to estimate model parameters and an approximate basis function.
Application of nomographs for analysis and prediction of receiver spurious response EMI
NASA Astrophysics Data System (ADS)
Heather, F. W.
1985-07-01
Spurious response EMI for the front end of a superheterodyne receiver follows a simple mathematical formula; however, the application of the formula to predict test frequencies produces more data than can be evaluated. An analysis technique has been developed to graphically depict all receiver spurious responses using a nomograph and to permit selection of optimum test frequencies. The discussion includes the math model used to simulate a superheterodyne receiver, the implementation of the model in the computer program, the approach to test frequency selection, interpretation of the nomographs, analysis and prediction of receiver spurious response EMI from the nomographs, and application of the nomographs. In addition, figures are provided of sample applications. This EMI analysis and prediction technique greatly improves the Electromagnetic Compatibility (EMC) test engineer's ability to visualize the scope of receiver spurious response EMI testing and optimize test frequency selection.
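The front-end mixing arithmetic the abstract alludes to can be sketched in a few lines. The relation f_RF = (m·f_LO ± f_IF)/n below is the standard superheterodyne spur equation; the harmonic-order limit, band, and example frequencies are illustrative assumptions, not values from the paper.

```python
def spurious_frequencies(f_lo, f_if, max_order=5, band=(0.0, float("inf"))):
    """Enumerate candidate spurious-response RF frequencies f_rf that
    satisfy n*f_rf = m*f_lo +/- f_if for harmonic orders up to max_order."""
    freqs = set()
    for m in range(1, max_order + 1):
        for n in range(1, max_order + 1):
            for sign in (+1, -1):
                f_rf = (m * f_lo + sign * f_if) / n
                if f_rf > 0 and band[0] <= f_rf <= band[1]:
                    freqs.add(round(f_rf, 6))
    return sorted(freqs)

# Example: 100 MHz LO with a 10.7 MHz IF, spurs limited to 50-200 MHz
spurs = spurious_frequencies(100.0, 10.7, max_order=3, band=(50.0, 200.0))
```

The desired response (m = n = 1) appears alongside the image and higher-order spurs, and the rapid growth of this list with harmonic order is exactly the clutter the nomograph technique helps a test engineer prune.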
Regression Model Term Selection for the Analysis of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred; Volden, Thomas R.
2010-01-01
The paper discusses the selection of regression model terms for the analysis of wind tunnel strain-gage balance calibration data. Different function class combinations are presented that may be used to analyze calibration data using either a non-iterative or an iterative method. The role of the intercept term in a regression model of calibration data is reviewed. In addition, useful algorithms and metrics originating from linear algebra and statistics are recommended that will help an analyst (i) to identify and avoid both linear and near-linear dependencies between regression model terms and (ii) to make sure that the selected regression model of the calibration data uses only statistically significant terms. Three different tests are suggested that may be used to objectively assess the predictive capability of the final regression model of the calibration data. These tests use both the original data points and regression model independent confirmation points. Finally, data from a simplified manual calibration of the Ames MK40 balance is used to illustrate the application of some of the metrics and tests to a realistic calibration data set.
Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution
NASA Astrophysics Data System (ADS)
Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd
2015-05-01
Extreme PM10 concentrations from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia are modeled using the Generalized Extreme Value (GEV) distribution. The data are blocked into monthly selection periods. The Mann-Kendall (MK) test suggests a non-stationary model, so two models are considered for the stations with trend. The likelihood ratio test is used to determine the best-fitted model, and the results show that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor the stationary model (Model 1). The return level, the PM10 concentration expected to be exceeded once within a selected period, is obtained.
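The return level reported in the last sentence can be computed directly from fitted GEV parameters. The formula below is the standard GEV return-level expression; the parameter values are invented for illustration, not the paper's estimates.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Level expected to be exceeded once every T blocks under
    GEV(mu, sigma, xi); xi -> 0 reduces to the Gumbel case."""
    y = -math.log(1.0 - 1.0 / T)          # reduced variate
    if abs(xi) < 1e-9:
        return mu - sigma * math.log(y)   # Gumbel limit
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Hypothetical monthly-maximum PM10 fit: 100-block return level
z100 = gev_return_level(mu=95.0, sigma=20.0, xi=0.1, T=100)
```

With a positive shape parameter the return level grows without bound as T increases, which is why the choice between stationary and non-stationary models matters for long-horizon estimates.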
ERIC Educational Resources Information Center
Eignor, Daniel R.; Douglass, James B.
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
A Chain-Retrieval Model for Voluntary Task Switching
ERIC Educational Resources Information Center
Vandierendonck, Andre; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick
2012-01-01
To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved…
A Demonstration of Regression False Positive Selection in Data Mining
ERIC Educational Resources Information Center
Pinder, Jonathan P.
2014-01-01
Business analytics courses, such as marketing research, data mining, forecasting, and advanced financial modeling, have substantial predictive modeling components. The predictive modeling in these courses requires students to estimate and test many linear regressions. As a result, false positive variable selection ("type I errors") is…
A comparison of three models for determining test fairness.
DOT National Transportation Integrated Search
1979-01-01
There are three prominent models of test fairness in the dichotomous situation: : (a) Thorndike's Constant Ratio model (the ratio of the proportion successful to the proportion selected should be equal for the majority and the minority group); : (b) ...
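Thorndike's Constant Ratio criterion in (a) reduces to a one-line check. The group proportions below are invented purely to illustrate the arithmetic.

```python
import math

def constant_ratio(p_successful, p_selected):
    """Thorndike's ratio: proportion successful over proportion selected."""
    return p_successful / p_selected

# Hypothetical group data: the selection rule is "fair" under this model
# when the ratios match across majority and minority groups.
majority = constant_ratio(p_successful=0.60, p_selected=0.50)
minority = constant_ratio(p_successful=0.30, p_selected=0.25)
fair = math.isclose(majority, minority)
```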
Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items
ERIC Educational Resources Information Center
Penfield, Randall D.
2006-01-01
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
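The Gaussian kernel model that gave the highest accuracy can be illustrated as kernel ridge regression on toy marker data. Everything below (marker matrix, yields, bandwidth, regularization) is an invented sketch of the general technique, not the study's implementation.

```python
import math

def gauss_kernel(u, v, h=2.0):
    """Gaussian kernel on marker vectors (allele counts 0/1/2)."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(u, v)) / h)

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def krr_fit(X, y, lam=0.1):
    """Kernel ridge regression: solve (K + lam*I) alpha = y, then
    predict new genotypes as a kernel-weighted sum over training ones."""
    n = len(X)
    K = [[gauss_kernel(X[i], X[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y)
    return lambda x_new: sum(a * gauss_kernel(x_new, xi)
                             for a, xi in zip(alpha, X))

# Toy training set: 4 genotypes x 3 markers, with observed yields
X = [[0, 1, 2], [1, 1, 0], [2, 0, 1], [0, 2, 2]]
y = [3.0, 2.0, 1.5, 3.5]
predict = krr_fit(X, y)
```

Because the kernel depends only on distances between marker profiles, the model captures non-additive genetic effects that a linear marker model would miss, which is one common explanation for its strong performance in genomic selection.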
Ru, Sushan; Hardner, Craig; Carter, Patrick A; Evans, Kate; Main, Dorrie; Peace, Cameron
2016-01-01
Seedling selection identifies superior seedlings as candidate cultivars based on predicted genetic potential for traits of interest. Traditionally, genetic potential is determined by phenotypic evaluation. With the availability of DNA tests for some agronomically important traits, breeders have the opportunity to include DNA information in their seedling selection operations—known as marker-assisted seedling selection. A major challenge in deploying marker-assisted seedling selection in clonally propagated crops is a lack of knowledge of the genetic gain achievable from alternative strategies. Existing models based on additive effects considering seed-propagated crops are not directly relevant for seedling selection of clonally propagated crops, as clonal propagation captures all genetic effects, not just additive. This study modeled genetic gain from traditional and various marker-based seedling selection strategies on a single trait basis through analytical derivation and stochastic simulation, based on a generalized seedling selection scheme of clonally propagated crops. Various trait-test scenarios with a range of broad-sense heritability and proportion of genotypic variance explained by DNA markers were simulated for two populations with different segregation patterns. Both derived and simulated results indicated that marker-based strategies tended to achieve higher genetic gain than phenotypic seedling selection for a trait where the proportion of genotypic variance explained by marker information was greater than the broad-sense heritability. Results from this study provide guidance in optimizing genetic gain from seedling selection for single traits where DNA tests providing marker information are available. PMID:27148453
NASA Astrophysics Data System (ADS)
He, Song-Bing; Ben Hu; Kuang, Zheng-Kun; Wang, Dong; Kong, De-Xin
2016-11-01
Adenosine receptors (ARs) are potential therapeutic targets for Parkinson’s disease, diabetes, pain, stroke and cancers. Prediction of subtype selectivity is therefore important from both therapeutic and mechanistic perspectives. In this paper, we introduced a shape similarity profile as molecular descriptor, namely three-dimensional biologically relevant spectrum (BRS-3D), for AR selectivity prediction. Pairwise regression and discrimination models were built with the support vector machine methods. The average determination coefficient (r2) of the regression models was 0.664 (for test sets). The 2B-3 (A2B vs A3) model performed best with q2 = 0.769 for training sets (10-fold cross-validation), and r2 = 0.766, RMSE = 0.828 for test sets. The models’ robustness and stability were validated with 100 times resampling and 500 times Y-randomization. We compared the performance of BRS-3D with 3D descriptors calculated by MOE. BRS-3D performed as good as, or better than, MOE 3D descriptors. The performances of the discrimination models were also encouraging, with average accuracy (ACC) 0.912 and MCC 0.792 (test set). The 2A-3 (A2A vs A3) selectivity discrimination model (ACC = 0.882 and MCC = 0.715 for test set) outperformed an earlier reported one (ACC = 0.784). These results demonstrated that, through multiple conformation encoding, BRS-3D can be used as an effective molecular descriptor for AR subtype selectivity prediction.
An Evaluation of Some Models for Culture-Fair Selection.
ERIC Educational Resources Information Center
Petersen, Nancy S.; Novick, Melvin R.
Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…
Algamal, Z Y; Lee, M H
2017-01-01
A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for the QSAR classification model estimation method is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method in the QSAR classification model performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Forecasting volatility with neural regression: a contribution to model adequacy.
Refenes, A N; Holt, W T
2001-01-01
Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
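The classical statistic that the paper generalizes is easy to state. The sketch below is the ordinary Durbin-Watson computation on a residual series (values illustrative); the paper's contribution is its extension to neural regressors via a generalized influence matrix.

```python
def durbin_watson(residuals):
    """DW = sum of squared successive differences over sum of squares;
    values near 2 suggest no first-order autocorrelation in residuals."""
    num = sum((residuals[i] - residuals[i - 1]) ** 2
              for i in range(1, len(residuals)))
    den = sum(e * e for e in residuals)
    return num / den

# Alternating residuals: strong negative autocorrelation, DW well above 2
dw = durbin_watson([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
```

Residual autocorrelation like this signals model misspecification: structure left in the residuals that the regressor, neural or linear, failed to capture.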
Rational selection of training and test sets for the development of validated QSAR models
NASA Astrophysics Data System (ADS)
Golbraikh, Alexander; Shen, Min; Xiao, Zhiyan; Xiao, Yun-De; Lee, Kuo-Hsiung; Tropsha, Alexander
2003-02-01
Quantitative Structure-Activity Relationship (QSAR) models are used increasingly to screen chemical databases and/or virtual chemical libraries for potentially bioactive molecules. These developments emphasize the importance of rigorous model validation to ensure that the models have acceptable predictive power. Using the k nearest neighbors (kNN) variable selection QSAR method for the analysis of several datasets, we have demonstrated recently that the widely accepted leave-one-out (LOO) cross-validated R2 (q2) is an inadequate characteristic to assess the predictive ability of the models [Golbraikh, A., Tropsha, A. Beware of q2! J. Mol. Graphics Mod. 20, 269-276, (2002)]. Herein, we provide additional evidence that there exists no correlation between the values of q2 for the training set and the accuracy of prediction (R2) for the test set and argue that this observation is a general property of any QSAR model developed with LOO cross-validation. We suggest that external validation using rationally selected training and test sets provides a means to establish a reliable QSAR model. We propose several approaches to the division of experimental datasets into training and test sets and apply them in QSAR studies of 48 functionalized amino acid anticonvulsants and a series of 157 epipodophyllotoxin derivatives with antitumor activity. We formulate a set of general criteria for the evaluation of predictive power of QSAR models.
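The contrast the authors draw, LOO cross-validated q2 on the training set versus predictive R2 on an external test set, can be reproduced for the simplest possible model. The least-squares line and toy data below are illustrative stand-ins; the paper's models are kNN QSAR models over molecular descriptors.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def r2(ys, preds):
    """Coefficient of determination against the mean of ys."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def loo_q2(xs, ys):
    """q2: refit with each point left out, predict it, score the preds."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    return r2(ys, preds)

# Toy near-linear data split into training and external test sets
train_x, train_y = [0, 1, 2, 3, 4], [0.1, 0.9, 2.2, 2.8, 4.1]
test_x, test_y = [5, 6], [5.2, 5.9]

a, b = fit_line(train_x, train_y)
q2 = loo_q2(train_x, train_y)
r2_ext = r2(test_y, [a * x + b for x in test_x])
```

Here both statistics agree because the toy data are genuinely linear; the paper's point is that for flexible QSAR models a high q2 gives no such guarantee about r2_ext.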
NASA Technical Reports Server (NTRS)
Mirick, Paul H.
1988-01-01
Seven cases were selected for correlation from a 1/5.86 Froude-scale experiment that examined several rotor designs which were being considered for full-scale flight testing as part of the Bearingless Main Rotor (BMR) program. The model rotor hub used in these tests consisted of back-to-back C-beams as flexbeam elements with a torque tube for pitch control. The first four cases selected from the experiment were hover tests which examined the effects on rotor stability of variations in hub-to-flexbeam coning, hub-to-flexbeam pitch, flexbeam-to-blade coning, and flexbeam-to-blade pitch. The final three cases were selected from the forward flight tests of optimum rotor configuration as defined during the hover test. The selected cases examined the effects of variations in forward speed, rotor speed, and shaft angle. Analytical results from Bell Helicopter Textron, Boeing Vertol, Sikorsky Aircraft, and the U.S. Army Aeromechanics Laboratory were compared with the data and the correlations ranged from poor-to-fair to fair-to-good.
Behavior of the maximum likelihood in quantum state tomography
NASA Astrophysics Data System (ADS)
Scholten, Travis L.; Blume-Kohout, Robin
2018-02-01
Quantum state tomography on a d-dimensional system demands resources that grow rapidly with d. They may be reduced by using model selection to tailor the number of parameters in the model (i.e., the size of the density matrix). Most model selection methods typically rely on a test statistic and a null theory that describes its behavior when two models are equally good. Here, we consider the loglikelihood ratio. Because of the positivity constraint ρ ≥ 0, quantum state space does not generally satisfy local asymptotic normality (LAN), meaning the classical null theory for the loglikelihood ratio (the Wilks theorem) should not be used. Thus, understanding and quantifying how positivity affects the null behavior of this test statistic is necessary for its use in model selection for state tomography. We define a new generalization of LAN, metric-projected LAN, show that quantum state space satisfies it, and derive a replacement for the Wilks theorem. In addition to enabling reliable model selection, our results shed more light on the qualitative effects of the positivity constraint on state tomography.
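For context, the classical null theory being replaced is the Wilks theorem: twice the loglikelihood ratio between nested models is asymptotically chi-squared when boundary constraints like positivity do not interfere. A minimal unconstrained sketch (Gaussian mean, invented data, hardcoded 5% critical value):

```python
import math
import random

random.seed(0)
data = [random.gauss(0.3, 1.0) for _ in range(200)]  # invented sample

def gauss_loglik(xs, mu, sigma=1.0):
    """Loglikelihood of xs under a Gaussian with known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

# Nested models: the null fixes mu = 0, the alternative fits mu by MLE
ll_null = gauss_loglik(data, 0.0)
mu_hat = sum(data) / len(data)
ll_alt = gauss_loglik(data, mu_hat)

lam = 2.0 * (ll_alt - ll_null)  # loglikelihood-ratio statistic
CHI2_95_DF1 = 3.841             # Wilks: chi-squared, 1 extra parameter
reject_null = lam > CHI2_95_DF1
```

Near the boundary ρ ≥ 0 of quantum state space this chi-squared null theory fails, which is precisely the gap the metric-projected LAN result fills.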
USDA-ARS?s Scientific Manuscript database
Breeding and selection for the traits with polygenic inheritance is a challenging task that can be done by phenotypic selection, by marker-assisted selection or by genome wide selection. We tested predictive ability of four selection models in a biparental population genotyped with 95 SNP markers an...
Divergent positive selection in rhodopsin from lake and riverine cichlid fishes.
Schott, Ryan K; Refvik, Shannon P; Hauser, Frances E; López-Fernández, Hernán; Chang, Belinda S W
2014-05-01
Studies of cichlid evolution have highlighted the importance of visual pigment genes in the spectacular radiation of the African rift lake cichlids. Recent work, however, has also provided strong evidence for adaptive diversification of riverine cichlids in the Neotropics, which inhabit environments of markedly different spectral properties from the African rift lakes. These ecological and/or biogeographic differences may have imposed divergent selective pressures on the evolution of the cichlid visual system. To test these hypotheses, we investigated the molecular evolution of the dim-light visual pigment, rhodopsin. We sequenced rhodopsin from Neotropical and African riverine cichlids and combined these data with published sequences from African cichlids. We found significant evidence for positive selection using random sites codon models in all cichlid groups, with the highest levels in African lake cichlids. Tests using branch-site and clade models that partitioned the data along ecological (lake, river) and/or biogeographic (African, Neotropical) boundaries found significant evidence of divergent selective pressures among cichlid groups. However, statistical comparisons among these models suggest that ecological, rather than biogeographic, factors may be responsible for divergent selective pressures that have shaped the evolution of the visual system in cichlids. We found that branch-site models did not perform as well as clade models for our data set, in which there was evidence for positive selection in the background. One of our most intriguing results is that the amino acid sites found to be under positive selection in Neotropical and African lake cichlids were largely nonoverlapping, despite falling into the same three functional categories: spectral tuning, retinal uptake/release, and rhodopsin dimerization. Taken together, these results would imply divergent selection across cichlid clades, but targeting similar functions. 
This study highlights the importance of molecular investigations of ecologically important groups and the flexibility of clade models in explicitly testing ecological hypotheses.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John BO
2008-01-01
Background We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Results Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21) and an RMSE of 45.1°C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. Conclusion With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors. PMID:18959785
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-01-01
In crop breeding, interest in predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models. PMID:24082033
R. Johnson; K. Jayawickrama
2003-01-01
Gains from various orchard strategies were modeled. The scenario tested 2,000 first-generation open-pollinated families, from which orchards of 20 selections were formed, using either parents, progeny or both. This was followed by a second-generation breeding population in which 200 full-sib families were tested, followed by a second-generation orchard of 20 selections....
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2^4 experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
Children's selective trust decisions: rational competence and limiting performance factors.
Hermes, Jonas; Behne, Tanya; Bich, Anna Elisa; Thielert, Christa; Rakoczy, Hannes
2018-03-01
Recent research has amply documented that even preschoolers learn selectively from others, preferring, for example, reliable over unreliable and competent over incompetent models. It remains unclear, however, what the cognitive foundations of such selective learning are, in particular, whether it builds on rational inferences or on less sophisticated processes. The current study, therefore, was designed to test directly the possibility that children are in principle capable of selective learning based on rational inference, yet revert to simpler strategies such as global impression formation under certain circumstances. Preschoolers (N = 75) were shown pairs of models that either differed in their degree of competence within one domain (strong vs. weak or knowledgeable vs. ignorant) or were both highly competent, but in different domains (e.g., strong vs. knowledgeable model). In the test trials, children chose between the models for strength- or knowledge-related tasks. The results suggest that, in fact, children are capable of rational inference-based selective trust: when both models were highly competent, children preferred the model with the competence most predictive and relevant for a given task. However, when choosing between two models that differed in competence on one dimension, children reverted to halo-style wide generalizations and preferred the competent models for both relevant and irrelevant tasks. These findings suggest that the rational strategies for selective learning, that children master in principle, can get masked by various performance factors. © 2017 John Wiley & Sons Ltd.
Model building strategy for logistic regression: purposeful selection.
Zhang, Zhongheng
2016-03-01
Logistic regression is one of the models most commonly used to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model-building strategy in R. I stress the use of the likelihood ratio test to judge whether deleting a variable significantly impacts model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be examined to disentangle complex relationships between covariates and their synergistic effects on the response variable. Finally, the model should be checked for goodness of fit (GOF), that is, how well the fitted model reflects the real data. The Hosmer-Lemeshow test is the most widely used GOF test for logistic regression models.
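Although the article works in R, the core deletion check — a likelihood ratio test between the model with and without a candidate variable — can be sketched in a few lines of Python. The toy counts below are hypothetical; under the null, G = 2(lnL_full − lnL_reduced) is approximately chi-square distributed with df equal to the number of deleted parameters (here 1, critical value 3.841 at α = 0.05). For one binary covariate the MLEs have closed forms (group event rates), so no fitting library is needed.

```python
import math

def bernoulli_loglik(events, trials, p):
    """Log-likelihood of observing `events` successes in `trials` Bernoulli trials."""
    return events * math.log(p) + (trials - events) * math.log(1 - p)

# Hypothetical 2x2 data: outcomes with (x = 1) and without (x = 0) a binary covariate.
events_x1, trials_x1 = 30, 50
events_x0, trials_x0 = 10, 50

# Reduced model (covariate deleted): one pooled probability; its MLE is the pooled rate.
p_pooled = (events_x1 + events_x0) / (trials_x1 + trials_x0)
ll_reduced = bernoulli_loglik(events_x1 + events_x0, trials_x1 + trials_x0, p_pooled)

# Full model (covariate kept): one probability per group; MLEs are the group rates.
ll_full = (bernoulli_loglik(events_x1, trials_x1, events_x1 / trials_x1)
           + bernoulli_loglik(events_x0, trials_x0, events_x0 / trials_x0))

# Likelihood ratio statistic, compared with the chi-square(1) critical value at alpha = 0.05.
G = 2 * (ll_full - ll_reduced)
keep_variable = G > 3.841
print(round(G, 2), keep_variable)  # G is about 17.26 here, so the covariate stays
```

With real data one would fit both models by maximum likelihood (e.g., `glm` in R) and apply the same comparison.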
NASA Astrophysics Data System (ADS)
Lanuru, Mahatma; Mashoreng, S.; Amri, K.
2018-03-01
The success of seagrass transplantation depends largely on site selection and suitable transplantation methods. The main objective of this study is to develop and use a site-selection model to identify the suitability of sites for seagrass (Enhalus acoroides) transplantation. Model development was based on the physical and biological characteristics of the transplantation site. The site-selection process is divided into three phases: Phase I identifies potential seagrass habitat using available knowledge, removing unsuitable sites before the transplantation test is performed. Phase II involves field assessment and transplantation tests of the best-scoring areas identified in Phase I. Phase III is the final calculation of the TSI (Transplant Suitability Index), based on results from Phases I and II. The model was used to identify the suitability of sites for seagrass transplantation on the west coast of South Sulawesi (3 sites at Labakkang Coast, 3 sites at Awerange Bay, and 3 sites at Lale-Lae Island). Of the nine sites, the site-selection model predicted two to be the most suitable for seagrass transplantation: Site II at Labakkang Coast and Site III at Lale-Lae Island.
HYBRID SNCR-SCR TECHNOLOGIES FOR NOX CONTROL: MODELING AND EXPERIMENT
The hybrid process of homogeneous gas-phase selective non-catalytic reduction (SNCR) followed by selective catalytic reduction (SCR) of nitric oxide (NO) was investigated through experimentation and modeling. Measurements, using NO-doped flue gas from a gas-fired 29 kW test combu...
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on a random choice among nearest-neighbor particles, and deterministic selection of nearest-neighbor particles, have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made on appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between the relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in predicting the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions is investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
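As an illustration of the baseline scheme the paper compares against, here is a minimal sketch (not the authors' code) of deterministic nearest-neighbor collision-partner selection within a single DSMC cell. The particle coordinates are made up; a real DSMC solver would draw the first particle per its collision-count formula and then accept or reject the pair on relative velocity.

```python
import math
import random

def nearest_neighbor_pair(positions, rng):
    """Pick one particle at random, then pair it with its nearest neighbor in the cell."""
    i = rng.randrange(len(positions))
    # Closest other particle; self-pairing is excluded.
    j = min((k for k in range(len(positions)) if k != i),
            key=lambda k: math.dist(positions[i], positions[k]))
    return i, j

rng = random.Random(0)
# Hypothetical particle positions inside one cell (arbitrary units).
cell = [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.5, 0.5)]
i, j = nearest_neighbor_pair(cell, rng)
print(i, j)
```

The schemes proposed in the paper would replace the distance criterion with one based on time spacing or the direction of the relative motion of the candidate pair.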
Currently, little justification is provided for nanomaterial testing concentrations in in vitro assays. The in vitro concentrations typically used may be higher than those experienced in exposed humans. Selection of concentration levels for hazard evaluation based on real-world ...
Multi-agent Reinforcement Learning Model for Effective Action Selection
NASA Astrophysics Data System (ADS)
Youk, Sang Jo; Lee, Bong Keun
Reinforcement learning is a subarea of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. In the multi-agent case especially, the state and action spaces become enormous compared with the single-agent case, so an effective action selection strategy is needed for efficient reinforcement learning. This paper proposes a multi-agent reinforcement learning model based on a fuzzy inference system in order to improve learning speed and select effective actions in a multi-agent setting. The paper verifies the effectiveness of the action selection strategy through evaluation tests based on RoboCup Keepaway, one of the standard test-beds for multi-agent systems. The proposed model can be used to evaluate the efficiency of various intelligent multi-agent systems and can also be applied to the strategy and tactics of robot soccer systems.
Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina
2017-01-01
A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits while maintaining current casework will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual casework from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods differed by no more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
ERIC Educational Resources Information Center
Subrahmanyam, Annamdevula
2017-01-01
Purpose: This paper aims to identify and test four competing models with the interrelationships between students' perceived service quality, students' satisfaction, loyalty and motivation using structural equation modeling (SEM), and to select the best model using the chi-square difference (Δχ²) statistic test. Design/methodology/approach: The study…
Valente, Bruno D.; Morota, Gota; Peñagaricano, Francisco; Gianola, Daniel; Weigel, Kent; Rosa, Guilherme J. M.
2015-01-01
The term “effect” in additive genetic effect suggests a causal meaning. However, inferences of such quantities for selection purposes are typically viewed and conducted as a prediction task. Predictive ability as tested by cross-validation is currently the most acceptable criterion for comparing models and evaluating new methodologies. Nevertheless, it does not directly indicate if predictors reflect causal effects. Such evaluations would require causal inference methods that are not typical in genomic prediction for selection. This suggests that the usual approach to infer genetic effects contradicts the label of the quantity inferred. Here we investigate if genomic predictors for selection should be treated as standard predictors or if they must reflect a causal effect to be useful, requiring causal inference methods. Conducting the analysis as a prediction or as a causal inference task affects, for example, how covariates of the regression model are chosen, which may heavily affect the magnitude of genomic predictors and therefore selection decisions. We demonstrate that selection requires learning causal genetic effects. However, genomic predictors from some models might capture noncausal signal, providing good predictive ability but poorly representing true genetic effects. Simulated examples are used to show that aiming for predictive ability may lead to poor modeling decisions, while causal inference approaches may guide the construction of regression models that better infer the target genetic effect even when they underperform in cross-validation tests. In conclusion, genomic selection models should be constructed to aim primarily for identifiability of causal genetic effects, not for predictive ability. PMID:25908318
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss, or drying, during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different semi-theoretical and empirical models were applied to the experimental data and compared according to their correlation coefficients, chi-squared test, and root mean square error, estimated by nonlinear regression analysis. Of all the drying models, the Page model was selected as the best one according to the correlation coefficient, chi-squared, and root mean square error values and its simplicity. The mean absolute estimation error of the proposed model by linear regression analysis was 2.43% for natural and 4.74% for forced convection modes.
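The Page model selected here, MR = exp(−k·tⁿ), can be fitted by ordinary linear regression after the transformation ln(−ln MR) = ln k + n·ln t. The sketch below uses hypothetical drying data, not the paper's measurements; because the data are generated noise-free from k = 0.05, n = 1.3, the fit recovers those parameters.

```python
import math

def fit_page(times, moisture_ratios):
    """Fit the Page model MR = exp(-k * t**n) via least squares on
    the linearized form ln(-ln MR) = ln k + n * ln t."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(mr)) for mr in moisture_ratios]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))   # slope = n
    k = math.exp(ybar - n * xbar)              # intercept = ln k
    return k, n

# Hypothetical drying curve generated from known parameters.
k_true, n_true = 0.05, 1.3
times = [2, 5, 10, 15, 20, 30]                 # e.g., minutes
mrs = [math.exp(-k_true * t ** n_true) for t in times]

k_fit, n_fit = fit_page(times, mrs)
print(round(k_fit, 3), round(n_fit, 3))        # recovers 0.05 and 1.3
```

With measured (noisy) data, goodness of fit would then be judged by the correlation coefficient, chi-squared, and RMSE as in the study.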
ERIC Educational Resources Information Center
Sheehan, Kathleen M.
2017-01-01
A model-based approach for matching language learners to texts of appropriate difficulty is described. Results are communicated to test takers via a targeted reading range expressed on the reporting scale of an automated text complexity measurement tool (ATCMT). Test takers can use this feedback to select reading materials that are well matched to…
Phase I Experimental Testing of a Generic Submarine Model in the DSTO Low Speed Wind Tunnel
2012-07-01
used during the tests, along with the test methodology. A sub-set of the data gathered is also presented and briefly discussed. Overall, the force... total pressure probe when positioned close to the model. 4. Results: Selected results from the testing of the generic submarine model in the... Appendix B summarises the test conditions. 4.3.1 Smoke Generator and Probe: An Aerotech smoke generator and probe were used for visualisation of
Seo, Dong Gi; Choi, Jeongwook
2018-05-17
Computerized adaptive testing (CAT) has been adopted in licensing examinations because of its test efficiency and accuracy, and much research on CAT has been published to demonstrate the efficiency and accuracy of the measurement. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean medical license examination (KMLE). The study used a post-hoc (real-data) simulation design. The item bank was built from all items in a 2017 KMLE, and all CAT algorithms were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and two-parameter logistic (2PL) models performed better than the 3PL model; maximum a posteriori (MAP) and expected a posteriori (EAP) estimation provided more accurate estimates than MLE and WLE; and maximum posterior weighted information (MPWI) and minimum expected posterior variance (MEPV) performed better than the other item selection methods. In terms of efficiency, the Rasch model is recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT; based on such a study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
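The item selection step of a CAT is easy to sketch. Under the Rasch model the Fisher information of an item at ability θ is I(θ) = p(1 − p), which is largest when the item difficulty b is closest to the current ability estimate. The minimal illustration below uses a hypothetical item bank and plain maximum-information selection rather than the MPWI/MEPV criteria studied in the paper.

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch model with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def info(theta, b):
    """Item Fisher information at theta: p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1 - p)

def next_item(theta, difficulties, administered):
    """Select the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return max(candidates, key=lambda i: info(theta, difficulties[i]))

# Hypothetical item bank (difficulty parameters) and current ability estimate.
bank = [-2.0, -1.0, -0.3, 0.4, 1.2, 2.5]
theta_hat = 0.5

first = next_item(theta_hat, bank, administered=set())
second = next_item(theta_hat, bank, administered={first})
print(first, second)  # items with difficulties 0.4 and 1.2, the two closest to theta
```

A live CAT would interleave this selection with an ability update (e.g., EAP or MAP) after each response until a stopping rule is met.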
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of the five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for the identification of positively selected genes.
A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification
ERIC Educational Resources Information Center
Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi
2012-01-01
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…
Pretest information for a test to validate plume simulation procedures (FA-17)
NASA Technical Reports Server (NTRS)
Hair, L. M.
1978-01-01
The results of an effort to plan a final verification wind tunnel test to validate the recommended correlation parameters and application techniques were presented. The test planning effort was complete except for test site finalization and the associated coordination. Two suitable test sites were identified. Desired test conditions were shown. Subsequent sections of this report present the selected model and test site, instrumentation of this model, planned test operations, and some concluding remarks.
A CLIPS-based expert system for the evaluation and selection of robots
NASA Technical Reports Server (NTRS)
Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.
1994-01-01
This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.
A probabilistic method for testing and estimating selection differences between populations
He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li
2015-01-01
Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. PMID:26463656
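In the spirit of the method described — the log odds ratio of allele frequencies between two populations as an estimate of their selection-coefficient difference, with a normal approximation for its variance — here is a small sketch. The allele counts are made up, and the variance formula used (Woolf's sum of reciprocal cell counts) is my assumption for the illustration, not necessarily the paper's genome-wide estimator.

```python
import math

def selection_difference_ci(a1, n1, a2, n2, z=1.96):
    """Log odds ratio of allele frequencies between two populations, with a
    normal-approximation 95% CI (variance: sum of reciprocal counts)."""
    b1, b2 = n1 - a1, n2 - a2                    # counts of the alternative allele
    log_or = math.log((a1 * b2) / (b1 * a2))
    se = math.sqrt(1 / a1 + 1 / b1 + 1 / a2 + 1 / b2)
    return log_or, (log_or - z * se, log_or + z * se)

# Hypothetical derived-allele counts out of total sampled chromosomes, two populations.
log_or, (lo, hi) = selection_difference_ci(a1=180, n1=400, a2=90, n2=400)
print(round(log_or, 3), round(lo, 3), round(hi, 3))
```

An interval that excludes zero would indicate a statistically detectable difference in selection between the two populations at that locus.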
Detecting Bias in Selection for Higher Education: Three Different Methods
ERIC Educational Resources Information Center
Kennet-Cohen, Tamar; Turvall, Elliot; Oren, Carmel
2014-01-01
This study examined selection bias in Israeli university admissions with respect to test language and gender, using three approaches for the detection of such bias: Cleary's model of differential prediction, boundary conditions for differential prediction and difference between "d's" (the Constant Ratio Model). The university admissions…
Comparisons of Means Using Exploratory and Confirmatory Approaches
ERIC Educational Resources Information Center
Kuiper, Rebecca M.; Hoijtink, Herbert
2010-01-01
This article discusses comparisons of means using exploratory and confirmatory approaches. Three methods are discussed: hypothesis testing, model selection based on information criteria, and Bayesian model selection. Throughout the article, an example is used to illustrate and evaluate the two approaches and the three methods. We demonstrate that…
Testing Different Model Building Procedures Using Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…
Conditional Covariance-Based Subtest Selection for DIMTEST
ERIC Educational Resources Information Center
Froelich, Amy G.; Habing, Brian
2008-01-01
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
Modeling HIV-1 Drug Resistance as Episodic Directional Selection
Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L.; Scheffler, Konrad
2012-01-01
The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance. PMID:22589711
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
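The practical difference between the two criteria is the penalty term: AIC = 2k − 2 ln L, while BIC = k ln n − 2 ln L, so BIC punishes extra parameters more heavily whenever n > e² ≈ 7.4. A sketch with hypothetical fitted log-likelihoods shows the two criteria disagreeing, which is exactly the kind of divergence such reviews examine:

```python
import math

def aic(loglik, k):
    """Akaike information criterion."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion for a sample of size n."""
    return k * math.log(n) - 2 * loglik

# Hypothetical (log-likelihood, parameter count) for three latent variable models, n = 200.
n = 200
models = {"1-factor": (-1523.4, 10), "2-factor": (-1502.1, 19), "3-factor": (-1498.0, 27)}

best_aic = min(models, key=lambda m: aic(*models[m]))
best_bic = min(models, key=lambda m: bic(models[m][0], models[m][1], n))
print(best_aic, best_bic)  # AIC picks the 2-factor model, BIC the more parsimonious 1-factor
```

Lower values are better for both criteria; the disagreement here stems purely from BIC's ln(200) ≈ 5.3 per-parameter penalty versus AIC's constant 2.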
Application of a Multidimensional Nested Logit Model to Multiple-Choice Test Items
ERIC Educational Resources Information Center
Bolt, Daniel M.; Wollack, James A.; Suh, Youngsuk
2012-01-01
Nested logit models have been presented as an alternative to multinomial logistic models for multiple-choice test items (Suh and Bolt in "Psychometrika" 75:454-473, 2010) and possess a mathematical structure that naturally lends itself to evaluating the incremental information provided by attending to distractor selection in scoring. One potential…
Cryogenic materials selection, availability, and cost considerations
NASA Technical Reports Server (NTRS)
Rush, H. F.
1983-01-01
The selection of structural alloys, composite materials, solder alloys, and filler materials for use in cryogenic models is discussed. In particular, materials testing programs conducted at Langley are described.
10 CFR 431.295 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Refrigerated Bottled or Canned Beverage Vending Machines Test Procedures § 431.295 Units to be tested. For each basic model of refrigerated bottled or canned beverage vending machine selected for...
Dream Chaser Model Being Tested at Langley Research Center (LaRC)
2013-07-11
NASA's Langley Research Center in Hampton, Va., recently conducted hypersonic testing of Dream Chaser models for SNC as part of the agency's Commercial Crew Program in order to obtain necessary data for the material selection and design of the TPS (thermal protection system).
Installation of child safety seats in selected 1988-1989 model year automobiles
DOT National Transportation Integrated Search
1989-06-01
The study tested whether currently marketed child safety seats are difficult to install in current model automobiles. The study also tested whether once installed, the child seats remain securely fastened when rocked or tilted. Thirteen toddler and f...
QCGAT mixer compound exhaust system design and static big model test report
NASA Technical Reports Server (NTRS)
Blackmore, W. L.; Thompson, C. E.
1978-01-01
A mixer exhaust system was designed to meet the proposed performance and exhaust jet noise goals for the AiResearch QCGAT engine. Some 0.35 scale models of the various nozzles were fabricated and aerodynamically and acoustically tested. Preliminary optimization, engine cycle matching, model test data and analysis are presented. A final mixer exhaust system is selected for optimum performance for the overall flight regime.
New consensus multivariate models based on PLS and ANN studies of sigma-1 receptor antagonists.
Oliveira, Aline A; Lipinski, Célio F; Pereira, Estevão B; Honorio, Kathia M; Oliveira, Patrícia R; Weber, Karen C; Romero, Roseli A F; de Sousa, Alexsandro G; da Silva, Albérico B F
2017-10-02
The treatment of neuropathic pain is very complex and there are few drugs approved for this purpose. Among the compounds studied in the literature, sigma-1 receptor antagonists have shown to be promising. In order to develop QSAR studies applied to compounds of the 1-arylpyrazole class, multivariate analyses have been performed in this work using partial least squares (PLS) and artificial neural network (ANN) methods. A PLS model has been obtained and validated with 45 compounds in the training set and 13 compounds in the test set (r²training = 0.761, q² = 0.656, r²test = 0.746, MSEtest = 0.132 and MAEtest = 0.258). Additionally, multi-layer perceptron ANNs (MLP-ANNs) were employed in order to propose non-linear models trained by the gradient descent with momentum backpropagation function. Based on MSEtest values, the best MLP-ANN models were combined in an MLP-ANN consensus model (MLP-ANN-CM; r²test = 0.824, MSEtest = 0.088 and MAEtest = 0.197). In the end, a general consensus model (GCM) has been obtained using the PLS and MLP-ANN-CM models (r²test = 0.811, MSEtest = 0.100 and MAEtest = 0.218). Besides, the selected descriptors (GGI6, Mor23m, SRW06, H7m, MLOGP, and μ) revealed important features that should be considered when one is planning new compounds of the 1-arylpyrazole class. The multivariate models proposed in this work are definitely a powerful tool for the rational drug design of new compounds for neuropathic pain treatment. Graphical abstract: Main scaffold of the 1-arylpyrazole derivatives and the selected descriptors.
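A consensus model of this kind is, at its core, an aggregation of the member models' predictions for each compound. The minimal sketch below uses an unweighted mean and hypothetical predictions; the paper's exact combination rule may differ.

```python
def consensus(predictions_per_model):
    """Consensus prediction: average the member models' outputs per compound."""
    n_models = len(predictions_per_model)
    n_samples = len(predictions_per_model[0])
    return [sum(p[i] for p in predictions_per_model) / n_models
            for i in range(n_samples)]

# Hypothetical activity predictions for four test compounds from two member models.
pls_pred = [6.1, 5.4, 7.0, 6.6]
ann_pred = [6.5, 5.0, 7.4, 6.2]
gcm_pred = consensus([pls_pred, ann_pred])
print(gcm_pred)
```

Averaging tends to cancel the uncorrelated errors of the member models, which is why consensus models often score better on held-out test sets than any single member.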
ERIC Educational Resources Information Center
McIntyre, Patrick J.
1974-01-01
Reported is a study to verify the pattern of bias associated with the Model Identification Test and to determine its source. This instrument is a limited verbal science test designed to determine the knowledge possessed by elementary school children of selected concepts related to "the particle nature of matter." (PEB)
NASA Technical Reports Server (NTRS)
1976-01-01
Full size Tug LO2 and LH2 tank configurations were defined, based on selected tank geometries. These configurations were then locally modeled for computer stress analysis. A large subscale test tank, representing the selected Tug LO2 tank, was designed and analyzed. This tank was fabricated using procedures which represented production operations. An evaluation test program was outlined and a test procedure defined. The necessary test hardware was also fabricated.
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
NASA Astrophysics Data System (ADS)
Rahmadani, S.; Dongoran, A.; Zarlis, M.; Zakarias
2018-03-01
This paper discusses the problem of feature selection using genetic algorithms (GA) on a dataset for classification problems. The classification models used are the decision tree (DT) and Naive Bayes. The paper discusses how the Naive Bayes and decision tree models handle the classification problem on datasets whose features are selected using a GA, and then compares the performance of the two models to determine whether accuracy improves. The results show an increase in accuracy when feature selection is performed with the GA. The proposed models are referred to as GADT (GA-Decision Tree) and GANB (GA-Naive Bayes). The datasets tested in this paper are taken from the UCI Machine Learning Repository.
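The GA wrapper idea can be sketched without any classifier at all: chromosomes are feature bitmasks, and a stand-in fitness function plays the role that cross-validated DT or Naive Bayes accuracy plays in the paper. Everything below — the "informative" feature set, the penalty weight, and the GA settings — is hypothetical.

```python
import random

rng = random.Random(42)
N_FEATURES = 10
INFORMATIVE = {0, 3, 7}   # hypothetical: the features that actually help the classifier

def fitness(mask):
    """Stand-in for classifier accuracy: reward informative features, penalize extras."""
    hits = sum(mask[i] for i in INFORMATIVE)
    extras = sum(mask) - hits
    return hits - 0.25 * extras

def evolve(pop_size=20, generations=40):
    """Tiny GA: truncation selection, one-point crossover, single bit-flip mutation."""
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the better half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_FEATURES)    # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(N_FEATURES)] ^= 1 # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the wrapper used by GADT/GANB, `fitness` would instead train the classifier on the masked feature subset and return its validation accuracy.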
Is it better to select or to receive? Learning via active and passive hypothesis testing.
Markant, Douglas B; Gureckis, Todd M
2014-02-01
People can test hypotheses through either selection or reception. In a selection task, the learner actively chooses observations to test his or her beliefs, whereas in reception tasks data are passively encountered. People routinely use both forms of testing in everyday life, but the critical psychological differences between selection and reception learning remain poorly understood. One hypothesis is that selection learning improves learning performance by enhancing generic cognitive processes related to motivation, attention, and engagement. Alternatively, we suggest that differences between these two learning modes derive from a hypothesis-dependent sampling bias that is introduced when a person collects data to test his or her own individual hypothesis. Drawing on influential models of sequential hypothesis-testing behavior, we show that such a bias (a) can lead to the collection of data that facilitates learning compared with reception learning and (b) can be more effective than observing the selections of another person. We then report a novel experiment based on a popular category learning paradigm that compares reception and selection learning. We additionally compare selection learners to a set of "yoked" participants who viewed the exact same sequence of observations under reception conditions. The results revealed systematic differences in performance that depended on the learner's role in collecting information and the abstract structure of the problem.
40 CFR 1048.405 - How does this program work?
Code of Federal Regulations, 2010 CFR
2010-07-01
... CONTROLS CONTROL OF EMISSIONS FROM NEW, LARGE NONROAD SPARK-IGNITION ENGINES Testing In-use Engines § 1048.405 How does this program work? (a) You must test in-use engines, for exhaust emissions, from the families we select. We may select up to 25 percent of your engine families in any model year—or one engine...
Earthquake and failure forecasting in real-time: A Forecasting Model Testing Centre
NASA Astrophysics Data System (ADS)
Filgueira, Rosa; Atkinson, Malcolm; Bell, Andrew; Main, Ian; Boon, Steven; Meredith, Philip
2013-04-01
Across Europe there are a large number of rock deformation laboratories, each of which runs many experiments. Similarly, there are a large number of theoretical rock physicists who develop constitutive and computational models both for rock deformation and for changes in geophysical properties. Here we consider how to open up opportunities for sharing experimental data in a way that is integrated with multiple hypothesis testing. We present a prototype for a new forecasting model testing centre based on e-infrastructures for capturing and sharing data and models to accelerate rock physics (RP) research. This proposal is triggered by our work on data assimilation in the NERC EFFORT (Earthquake and Failure Forecasting in Real Time) project, using data provided by the NERC CREEP 2 experimental project as a test case. EFFORT is a multi-disciplinary collaboration between geoscientists, rock physicists and computer scientists. Brittle failure of the crust is likely to play a key role in controlling the timing of a range of geophysical hazards, such as volcanic eruptions, yet the predictability of brittle failure is unknown. Our aim is to provide a facility for developing and testing models to forecast brittle failure in experimental and natural data. Model testing is performed in real-time, verifiably prospective mode, in order to avoid the selection biases that are possible in retrospective analyses. The project will ultimately quantify the predictability of brittle failure, and how this predictability scales from simple, controlled laboratory conditions to the complex, uncontrolled real world. Experimental data are collected from controlled laboratory experiments, including data from the UCL laboratory and from the CREEP 2 project, which will undertake experiments in a deep-sea laboratory.
We illustrate the properties of the prototype testing centre by streaming and analysing realistically noisy synthetic data, as an aid to generating and improving testing methodologies under imperfect conditions. The forecasting model testing centre uses a repository to hold all the data and models and a catalogue to hold all the corresponding metadata. It supports the following. Data transfer: Upload experimental data: we have developed the FAST (Flexible Automated Streaming Transfer) tool to upload data from RP laboratories to the repository. FAST sets up data transfer requirements and automatically selects the transfer protocol. Metadata are automatically created and stored. Web data access: Create synthetic data: users can choose a generator and supply parameters. Synthetic data are automatically stored with corresponding metadata. Select data and models: search the metadata using criteria designed for RP. The metadata of each dataset (synthetic or from a laboratory) and each model are well described through their respective catalogues, accessible via the web portal. Upload models: upload and store a model with associated metadata, providing an opportunity to share models. The web portal solicits and creates metadata describing each model. Run models and visualise results: selected data and a model can be submitted to a high-performance computing resource that hides technical details. Results are displayed in accelerated time and stored, allowing retrieval, inspection and aggregation. The forecasting model testing centre proposed here could be integrated into EPOS. Its expected benefits are: improved understanding of brittle failure prediction and its scalability to natural phenomena; accelerated and extensive testing and rapid sharing of insights; increased impact and visibility of RP and geoscience research; and resources for education and training. A key challenge is to agree the framework for sharing RP data and models. Our work is a provocative first step.
Analysis of Weibull Grading Test for Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
The Weibull grading test is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate calculation method. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
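The log-linear relationship for the characteristic life mentioned above can be illustrated with a small calculation. The stress levels and characteristic lives below are hypothetical, and only a voltage term is modeled (the paper also treats temperature via an acceleration factor):

```python
import math

# Hypothetical HALT results: Weibull characteristic life (hours) observed at
# two voltage stress levels, at a fixed temperature. Values are illustrative.
eta1, V1 = 1200.0, 10.0   # characteristic life at 10 V
eta2, V2 = 150.0, 16.0    # characteristic life at 16 V

# General log-linear life-stress model: ln(eta) = a - b*V.
# Two stress levels determine the two parameters.
b = (math.log(eta1) - math.log(eta2)) / (V2 - V1)
a = math.log(eta1) + b * V1

V_use = 5.0                        # assumed derated application voltage
eta_use = math.exp(a - b * V_use)  # extrapolated characteristic life at use conditions
AF = eta_use / eta1                # acceleration factor of the 10 V test vs. use
print(f"b = {b:.3f}, acceleration factor = {AF:.2f}")
```

Extending the exponent to `a - b*V - c/T` gives the combined voltage-temperature form the abstract refers to, given data at two temperatures as well.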
Design and application of electromechanical actuators for deep space missions
NASA Technical Reports Server (NTRS)
Haskew, Tim A.; Wander, John
1993-01-01
During the period 8/16/92 through 2/15/93, work has been focused on three major topics: (1) screw modeling and testing; (2) motor selection; and (3) health monitoring and fault diagnosis. Detailed theoretical analysis has been performed to specify a full dynamic model for the roller screw. A test stand has been designed for model parameter estimation and screw testing. In addition, the test stand is expected to be used to perform a study on transverse screw loading.
A probabilistic method for testing and estimating selection differences between populations.
He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li
2015-12-01
Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
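The core estimator described above, the logarithm odds ratio of allele frequencies as an estimate of the difference in selection coefficients, can be sketched in a few lines. The allele counts are invented for illustration (not the Han/Tibetan data), and Woolf's standard error is used as a simple stand-in for the paper's genome-wide variance estimate:

```python
import math

# Hypothetical derived/ancestral allele counts at one locus in two populations.
a, b = 180, 20    # population 1: derived, ancestral
c, d = 60, 140    # population 2: derived, ancestral

# Log odds ratio of allele frequencies: approximates the difference in
# selection coefficients between the populations (up to a time scaling).
log_or = math.log((a * d) / (b * c))

# Woolf's standard error, an illustrative stand-in for the genome-wide estimate.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
z = log_or / se
ci = (log_or - 1.96 * se, log_or + 1.96 * se)
print(f"log-OR = {log_or:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), z = {z:.1f}")
```

A z-score this large would indicate a statistically significant selection difference at the locus, mirroring the EPAS1/EGLN1 findings in structure if not in data.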
Lessons learned from selecting and testing spaceflight potentiometers
NASA Technical Reports Server (NTRS)
Iskenderian, T.
1994-01-01
A solar array drive (SAD) was designed for operation on the TOPEX/POSEIDON spacecraft that was launched in August 1992. The experience gained in selecting, specifying, testing to failure, and redesigning its position sensor produced valuable lessons for future component selection and qualification. Issues of spaceflight heritage, cost/benefit/risk assessment, and component specification are addressed. It was found that costly schedule and budget overruns might have been avoided if the capability of the candidate sensors to meet requirements had been more critically examined prior to freezing the design. The use of engineering models and early qualification tests is also recommended.
A lab-based study of subground passive cooling system for indoor temperature control
NASA Astrophysics Data System (ADS)
Chok, Mun-Hong; Chan, Chee-Ming
2017-11-01
Passive cooling is an alternative cooling technique that helps reduce high energy consumption. Meanwhile, dredged marine soil (DMS) is typically dumped or disposed of as a waste material, and dredging works incur high labor costs, so reusing DMS as fill along the coastal area is attractive. In this study, DMS was chosen to examine the effectiveness of a passive cooling system through model tests. Soil characterization was carried out according to BS1377: Part 2: 1990. Models were built at a scale of 3 cm to 1 m. The heat exchange unit consists of three pipe designs: parallel, ramp and spiral. Preliminary tests, including a flow rate test and soil sample selection, were conducted to select the best heat exchange unit for the model test. The model test covered two conditions, day and night, each with four configurations for which temperatures were measured. The results show that the configuration with the window left open and the fan switched on (WO/FO) produced the most effective cooling, from 29 °C to 27 °C, a drop of 6.9 %.
Power Hardware-in-the-Loop Evaluation of PV Inverter Grid Support on Hawaiian Electric Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin A; Prabakar, Kumaraguru; Nagarajan, Adarsh
As more grid-connected photovoltaic (PV) inverters become compliant with evolving interconnection requirements, there is increased interest from utilities in understanding how to best deploy advanced grid-support functions (GSF) in the field. One efficient and cost-effective method to examine such deployment options is to leverage power hardware-in-the-loop (PHIL) testing methods, which combine the fidelity of hardware tests with the flexibility of computer simulation. This paper summarizes a study wherein two Hawaiian Electric feeder models were converted to real-time models using an OPAL-RT real-time digital testing platform, and integrated with models of GSF-capable PV inverters based on characterization test data. The integrated model was subsequently used in PHIL testing to evaluate the effects of different fixed power factor and volt-watt control settings on voltage regulation of the selected feeders using physical inverters. Selected results are presented in this paper, and complete results of this study were provided as inputs for field deployment and technical interconnection requirements for grid-connected PV inverters on the Hawaiian Islands.
CAT Model with Personalized Algorithm for Evaluation of Estimated Student Knowledge
ERIC Educational Resources Information Center
Andjelic, Svetlana; Cekerevac, Zoran
2014-01-01
This article presents the original model of the computer adaptive testing and grade formation, based on scientifically recognized theories. The base of the model is a personalized algorithm for selection of questions depending on the accuracy of the answer to the previous question. The test is divided into three basic levels of difficulty, and the…
Testing the Cultural Differences of School Characteristics with Measurement Invariance
ERIC Educational Resources Information Center
Demir, Ergül
2016-01-01
In this study, the aim was to model school characteristics in a multivariate structure and, based on this model, to test the model's invariance across five randomly selected countries and economies from the PISA 2012 sample. It is thought that significant differences across groups in the context of school characteristics have the…
A Bayesian Hierarchical Selection Model for Academic Growth with Missing Data
ERIC Educational Resources Information Center
Allen, Jeff
2017-01-01
Using a sample of schools testing annually in grades 9-11 with a vertically linked series of assessments, a latent growth curve model is used to model test scores with student intercepts and slopes nested within school. Missed assessments can occur because of student mobility, student dropout, absenteeism, and other reasons. Missing data…
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
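A hedged sketch of the idea behind permutation selection: permute the response to destroy any signal, find the smallest penalty that zeroes every coefficient for each permuted copy, and summarize that null distribution to set the penalty. The simulated data and the use of the median as the summary statistic are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, k = 200, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:k] = 1.0          # k true signals
y = X @ beta + rng.standard_normal(n)

X = (X - X.mean(0)) / X.std(0)              # standardize predictors
yc = y - y.mean()

def alpha_max(resp):
    """Smallest penalty zeroing all coefficients in sklearn's Lasso objective."""
    return np.abs(X.T @ resp).max() / n

# Null distribution of alpha_max under permuted (signal-free) responses.
null_alphas = [alpha_max(rng.permutation(yc)) for _ in range(100)]
alpha_perm = float(np.median(null_alphas))  # illustrative summary choice

fit = Lasso(alpha=alpha_perm).fit(X, yc)
selected = np.flatnonzero(fit.coef_)
print("alpha =", round(alpha_perm, 3), "selected:", selected)
```

The appeal for variable selection is that the penalty is calibrated directly against false selection under the permutation null, rather than against prediction error as in CV.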
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test' in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests show that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
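The excursion-based logic described above can be demonstrated on simulated series: compare a series' maximum excursion with the null distribution generated by unbiased random walks. The drift and noise parameters below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def max_excursion(series):
    """Largest absolute displacement from the starting value."""
    return float(np.max(np.abs(series - series[0])))

T, reps = 100, 2000
# Null distribution: maximum excursion of unbiased unit-step random walks.
null = np.array([max_excursion(np.cumsum(rng.normal(0, 1, T))) for _ in range(reps)])

trend = np.cumsum(rng.normal(0.5, 1, T))   # drifting walk: directional-selection-like
stasis = rng.normal(0, 1, T)               # noise about a fixed mean: stabilizing-like

p_more = np.mean(null >= max_excursion(trend))    # more change than a random walk?
p_less = np.mean(null <= max_excursion(stasis))   # less change than a random walk?
print(f"trend p = {p_more:.3f}, stasis p = {p_less:.3f}")
```

Adding noise to the trend series rapidly inflates `p_more` while leaving `p_less` nearly untouched, which is the asymmetry in type II error rates that the abstract describes.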
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
Antonello, ZA; Nucera, C
2015-01-01
The molecular signature of advanced and metastatic thyroid carcinoma involves deregulation of multiple fundamental pathways activated in the tumor microenvironment. These include BRAFV600E and AKT, which affect tumor initiation, progression and metastasis. Human thyroid cancer orthotopic mouse models are based on human cell lines that generally harbor genetic alterations found in human thyroid cancers. They can reproduce in vivo and in situ (in the thyroid) many features of aggressive and refractory human advanced thyroid carcinomas, including local invasion and metastasis. Humanized orthotopic mouse models seem to be ideal, and are commonly used, for preclinical and translational studies of compounds and therapies, not only because they may mimic key aspects of human diseases (e.g. metastasis) but also for their reproducibility. In addition, they might provide the possibility to evaluate systemic effects of treatments. So far, human thyroid cancer in vivo models have mainly been used to test single compounds, non-selective and selective. Despite the greater antitumor activity and lower toxicity obtained with different selective drugs compared with non-selective ones, most of them are only able to delay disease progression, which ultimately may restart with similarly aggressive behavior. Aggressive thyroid tumors (for example, anaplastic or poorly differentiated thyroid carcinoma) carry several complex genetic alterations that are likely cooperating to promote disease progression and might confer resistance to single-compound approaches. Orthotopic models of human thyroid cancer also hold the potential to be good models for testing novel combinatorial therapies. In this article, we will summarize results on preclinical testing of selective and non-selective single compounds in orthotopic mouse models based on validated human thyroid cancer cell lines harboring the BRAFV600E mutation or with wild-type BRAF.
Furthermore, we will discuss the potential use of this model also for combinatorial approaches, which are expected to take place in the upcoming human thyroid cancer basic and clinical research. PMID:24362526
Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis
NASA Astrophysics Data System (ADS)
Kurtulus, Bedri; Flipo, Nicolas
2012-01-01
The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro fuzzy inference system) for interpolating hydraulic head in a 40-km2 agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, generalized bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Then each is used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The best model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation with a higher sensitivity to soil elevation.
Iterative Refinement of a Binding Pocket Model: Active Computational Steering of Lead Optimization
2012-01-01
Computational approaches for binding affinity prediction are most frequently demonstrated through cross-validation within a series of molecules or through performance shown on a blinded test set. Here, we show how such a system performs in an iterative, temporal lead optimization exercise. A series of gyrase inhibitors with known synthetic order formed the set of molecules that could be selected for “synthesis.” Beginning with a small number of molecules, based only on structures and activities, a model was constructed. Compound selection was done computationally, each time making five selections based on confident predictions of high activity and five selections based on a quantitative measure of three-dimensional structural novelty. Compound selection was followed by model refinement using the new data. Iterative computational candidate selection produced rapid improvements in selected compound activity, and incorporation of explicitly novel compounds uncovered much more diverse active inhibitors than strategies lacking active novelty selection. PMID:23046104
Stephan, Wolfgang
2016-01-01
In the past 15 years, numerous methods have been developed to detect selective sweeps underlying adaptations. These methods are based on relatively simple population genetic models, including one or two loci at which positive directional selection occurs, and one or two marker loci at which the impact of selection on linked neutral variation is quantified. Information about the phenotype under selection is not included in these models (except for fitness). In contrast, in the quantitative genetic models of adaptation, selection acts on one or more phenotypic traits, such that a genotype-phenotype map is required to bridge the gap to population genetics theory. Here I describe the range of population genetic models from selective sweeps in a panmictic population of constant size to evolutionary traffic when simultaneous sweeps at multiple loci interfere, and I also consider the case of polygenic selection characterized by subtle allele frequency shifts at many loci. Furthermore, I present an overview of the statistical tests that have been proposed based on these population genetics models to detect evidence for positive selection in the genome. © 2015 John Wiley & Sons Ltd.
Force on Force Modeling with Formal Task Structures and Dynamic Geometry
2017-03-24
task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test
Michel, Sebastian; Ametz, Christian; Gungor, Huseyin; Akgöl, Batuhan; Epure, Doru; Grausgruber, Heinrich; Löschenberger, Franziska; Buerstmayr, Hermann
2017-02-01
Early generation genomic selection is superior to conventional phenotypic selection in line breeding and can be strongly improved by including additional information from preliminary yield trials. The selection of lines that enter resource-demanding multi-environment trials is a crucial decision in every line breeding program as a large amount of resources are allocated for thoroughly testing these potential varietal candidates. We compared conventional phenotypic selection with various genomic selection approaches across multiple years as well as the merit of integrating phenotypic information from preliminary yield trials into the genomic selection framework. The prediction accuracy using only phenotypic data was rather low (r = 0.21) for grain yield but could be improved by modeling genetic relationships in unreplicated preliminary yield trials (r = 0.33). Genomic selection models were nevertheless found to be superior to conventional phenotypic selection for predicting grain yield performance of lines across years (r = 0.39). We subsequently simplified the problem of predicting untested lines in untested years to predicting tested lines in untested years by combining breeding values from preliminary yield trials and predictions from genomic selection models by a heritability index. This genomic assisted selection led to a 20% increase in prediction accuracy, which could be further enhanced by an appropriate marker selection for both grain yield (r = 0.48) and protein content (r = 0.63). The easy to implement and robust genomic assisted selection gave thus a higher prediction accuracy than either conventional phenotypic or genomic selection alone. The proposed method took the complex inheritance of both low and high heritable traits into account and appears capable to support breeders in their selection decisions to develop enhanced varieties more efficiently.
NASA Astrophysics Data System (ADS)
Albertson, C. W.
1982-03-01
A 1/12th scale model of the Curved Surface Test Apparatus (CSTA), which will be used to study aerothermal loads and evaluate Thermal Protection Systems (TPS) on a fuselage-type configuration in the Langley 8-Foot High Temperature Structures Tunnel (8 ft HTST), was tested in the Langley 7-Inch Mach 7 Pilot Tunnel. The purpose of the tests was to study the overall flow characteristics and define an envelope for testing the CSTA in the 8 ft HTST. Wings were tested on the scaled CSTA model to select a wing configuration with the most favorable characteristics for conducting TPS evaluations for curved and intersecting surfaces. The results indicate that the CSTA and selected wing configuration can be tested at angles of attack up to 15.5 and 10.5 degrees, respectively. The base pressure for both models was at the expected low level for most test conditions. Results generally indicate that the CSTA and wing configuration will provide a useful test bed for aerothermal loads and thermal structural concept evaluation over a broad range of flow conditions in the 8 ft HTST.
NASA Technical Reports Server (NTRS)
Albertson, C. W.
1982-01-01
A 1/12th scale model of the Curved Surface Test Apparatus (CSTA), which will be used to study aerothermal loads and evaluate Thermal Protection Systems (TPS) on a fuselage-type configuration in the Langley 8-Foot High Temperature Structures Tunnel (8 ft HTST), was tested in the Langley 7-Inch Mach 7 Pilot Tunnel. The purpose of the tests was to study the overall flow characteristics and define an envelope for testing the CSTA in the 8 ft HTST. Wings were tested on the scaled CSTA model to select a wing configuration with the most favorable characteristics for conducting TPS evaluations for curved and intersecting surfaces. The results indicate that the CSTA and selected wing configuration can be tested at angles of attack up to 15.5 and 10.5 degrees, respectively. The base pressure for both models was at the expected low level for most test conditions. Results generally indicate that the CSTA and wing configuration will provide a useful test bed for aerothermal loads and thermal structural concept evaluation over a broad range of flow conditions in the 8 ft HTST.
Modeling Cross-Situational Word–Referent Learning: Prior Questions
Yu, Chen; Smith, Linda B.
2013-01-01
Both adults and young children possess powerful statistical computation capabilities—they can infer the referent of a word from highly ambiguous contexts involving many words and many referents by aggregating cross-situational statistical information across contexts. This ability has been explained by models of hypothesis testing and by models of associative learning. This article describes a series of simulation studies and analyses designed to understand the different learning mechanisms posited by the 2 classes of models and their relation to each other. Variants of a hypothesis-testing model and a simple or dumb associative mechanism were examined under different specifications of information selection, computation, and decision. Critically, these 3 components of the models interact in complex ways. The models illustrate a fundamental tradeoff between amount of data input and powerful computations: With the selection of more information, dumb associative models can mimic the powerful learning that is accomplished by hypothesis-testing models with fewer data. However, because of the interactions among the component parts of the models, the associative model can mimic various hypothesis-testing models, producing the same learning patterns but through different internal components. The simulations argue for the importance of a compositional approach to human statistical learning: the experimental decomposition of the processes that contribute to statistical learning in human learners and models with the internal components that can be evaluated independently and together. PMID:22229490
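A minimal version of the "dumb" associative mechanism discussed above can be written in a few lines: the learner simply accumulates word-referent co-occurrence counts across ambiguous scenes and guesses the most-associated referent for each word. The vocabulary size, scene size, and number of scenes are illustrative choices, not parameters from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words = 10                       # ground truth: word i names referent i
counts = np.zeros((n_words, n_words))

# Each scene presents 3 words alongside their 3 referents, with no pairing
# signal: the associative learner strengthens every word-referent pair present.
for _ in range(60):
    scene = rng.choice(n_words, size=3, replace=False)
    for w in scene:
        counts[w, scene] += 1

guesses = counts.argmax(axis=1)    # most-associated referent per word
accuracy = float(np.mean(guesses == np.arange(n_words)))
print("accuracy:", accuracy)
```

Correct pairings co-occur in every scene while spurious pairings co-occur only by chance, so the diagonal of the count matrix comes to dominate; this is the cross-situational aggregation both model classes must explain.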
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
Perceived peer influence and peer selection on adolescent smoking.
Hoffman, Beth R; Monge, Peter R; Chou, Chih-Ping; Valente, Thomas W
2007-08-01
Despite advances in tobacco control, adolescent smoking remains a problem. The smoking status of friends is one of the highest correlates with adolescent smoking. This homophily (commonality of friends based on a given attribute) may be due to either peer pressure, where adolescents adopt the smoking behaviors of their friends, or peer selection, where adolescents choose friends based on their smoking status. This study used structural equation modeling to test a model of peer influence and peer selection on ever smoking by adolescents. The primary analysis of the model did not reach significance, but post hoc analyses did result in a model with good fit. Results indicated that both peer influence and peer selection were occurring, and that peer influence was more salient in the population than was peer selection. Implications of these results for tobacco prevention programs are discussed.
Selection of test paths for solder joint intermittent connection faults under DC stimulus
NASA Astrophysics Data System (ADS)
Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen
2018-06-01
The test path of solder joint intermittent connection faults under direct-current stimulus is examined in this paper. According to the physical structure of the circuit, a network model is established first. A network node is utilised to represent the test node. The path edge refers to the number of intermittent connection faults in the path. Then, the selection criteria of the test path based on the node degree index are proposed and the solder joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method. To test if the intermittent fault is covered by the test paths, the intermittent fault is simulated by a switch. The results show that the proposed method can detect the solder joint intermittent connection fault using fewer test paths. Additionally, the number of detection steps is greatly reduced without compromising fault coverage.
Detecting Directional Selection in the Presence of Recent Admixture in African-Americans
Lohmueller, Kirk E.; Bustamante, Carlos D.; Clark, Andrew G.
2011-01-01
We investigate the performance of tests of neutrality in admixed populations using plausible demographic models for African-American history as well as resequencing data from African and African-American populations. The analysis of both simulated and human resequencing data suggests that recent admixture does not result in an excess of false-positive results for neutrality tests based on the frequency spectrum after accounting for the population growth in the parental African population. Furthermore, when simulating positive selection, Tajima's D, Fu and Li's D, and haplotype homozygosity have lower power to detect population-specific selection using individuals sampled from the admixed population than from the nonadmixed population. Fay and Wu's H test, however, has more power to detect selection using individuals from the admixed population than from the nonadmixed population, especially when the selective sweep ended long ago. Our results have implications for interpreting recent genome-wide scans for positive selection in human populations. PMID:21196524
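Tajima's D, one of the frequency-spectrum neutrality tests compared above, can be sketched from its standard definition (the toy alignment and the `tajimas_d` helper are illustrative assumptions, not the authors' pipeline):

```python
import itertools
import math

def tajimas_d(seqs):
    # seqs: aligned, equal-length sequences with at least one segregating site
    n = len(seqs)
    pairs = list(itertools.combinations(seqs, 2))
    # pi: average number of pairwise nucleotide differences
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # S: number of segregating (polymorphic) sites
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    # Tajima's (1989) normalizing constants
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1.0) / (3.0 * (n - 1.0))
    b2 = 2.0 * (n * n + n + 3.0) / (9.0 * n * (n - 1.0))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2.0) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1.0))
```

An excess of rare variants (as after a selective sweep or population growth, the confound the abstract controls for) makes pi small relative to S/a1 and drives D negative.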
40 CFR 90.706 - Engine sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Engine sample selection. 90.706 Section...) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Manufacturer Production Line Testing Program § 90.706 Engine sample selection. (a) At the start of each model year, the small...
40 CFR 90.706 - Engine sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Engine sample selection. 90.706... (CONTINUED) CONTROL OF EMISSIONS FROM NONROAD SPARK-IGNITION ENGINES AT OR BELOW 19 KILOWATTS Manufacturer Production Line Testing Program § 90.706 Engine sample selection. (a) At the start of each model year, the...
40 CFR 91.506 - Engine sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Engine sample selection. 91.506... (CONTINUED) CONTROL OF EMISSIONS FROM MARINE SPARK-IGNITION ENGINES Manufacturer Production Line Testing Program § 91.506 Engine sample selection. (a) At the start of each model year, the marine SI engine...
The evolution of trade-offs: where are we?
Roff, D A; Fairbairn, D J
2007-03-01
Trade-offs are a core component of many evolutionary models, particularly those dealing with the evolution of life histories. In the present paper, we identify four topics of key importance for studies of the evolutionary biology of trade-offs. First, we consider the underlying concept of 'constraint'. We conclude that this term is typically used too vaguely and suggest that 'constraint' in the sense of a bias should be clearly distinguished from 'constraint' in the sense of proscribed combinations of traits or evolutionary trajectories. Secondly, we address the utility of the acquisition-allocation model (the 'Y-model'). We find that, whereas this model and its derivatives have provided new insights, a misunderstanding of the pivotal equation has led to incorrect predictions and faulty tests. Thirdly, we ask how trade-offs are expected to evolve under directional selection. A quantitative genetic model predicts that, under weak or short-term selection, the intercept will change but the slope will remain constant. Two empirical tests support this prediction but these are based on comparisons of geographic populations: more direct tests will come from artificial selection experiments. Finally, we discuss what maintains variation in trade-offs noting that at present little attention has been given to this question. We distinguish between phenotypic and genetic variation and suggest that the latter is most in need of explanation. We suggest that four factors deserving investigation are mutation-selection balance, antagonistic pleiotropy, correlational selection and spatio-temporal variation, but as in the other areas of research on trade-offs, empirical generalizations are impeded by lack of data. Although this lack is discouraging, we suggest that it provides a rich ground for further study and the integration of many disciplines, including the emerging field of genomics.
Testing and Analytical Modeling for Purging Process of a Cryogenic Line
NASA Technical Reports Server (NTRS)
Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.
2015-01-01
To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of an upper stage, two test series were conducted. The test article, an inclined line 3.35 m long and 20 cm in diameter, was filled with liquid or gaseous hydrogen and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer computer program, was utilized to model and simulate selected tests. The test procedures, modeling descriptions, and the results are presented in the following sections.
ERIC Educational Resources Information Center
Flight, Ingrid H.; Wilson, Carlene J.; McGillivray, Jane; Myers, Ronald E.
2010-01-01
We investigated whether the five-factor structure of the Preventive Health Model for colorectal cancer screening, developed in the United States, has validity in Australia. We also tested extending the model with the addition of the factor Self-Efficacy to Screen using Fecal Occult Blood Test (SESFOBT). Randomly selected men and women aged between…
40 CFR 90.706 - Engine sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... = emission test result for an individual engine. x = mean of emission test results of the actual sample. FEL... test with the last test result from the previous model year and then calculate the required sample size.... Test results used to calculate the variables in the following Sample Size Equation must be final...
[Hyperspectral remote sensing image classification based on SVM optimized by clonal selection].
Liu, Qing-Jie; Jing, Lin-Hai; Wang, Meng-Fei; Lin, Qi-Zhong
2013-03-01
Model selection for the support vector machine (SVM), involving the choice of kernel and margin parameter values, is usually time-consuming and strongly affects both the training efficiency of the SVM model and the final classification accuracy of an SVM hyperspectral remote sensing image classifier. First, based on combinatorial optimization theory and the cross-validation method, an artificial immune clonal selection algorithm is introduced for the optimal selection of the SVM kernel parameter and margin parameter C (CSSVM) to improve the training efficiency of the SVM model. An experiment classifying AVIRIS imagery of the Indian Pines site in the USA was then performed to test the novel CSSVM against a traditional SVM classifier tuned by grid-search cross-validation (GSSVM). Evaluation indexes, including SVM model training time, overall classification accuracy (OA), and the Kappa index, were analyzed quantitatively for both CSSVM and GSSVM. The OA of CSSVM on the test samples and on the whole image is 85.1% and 81.58%, respectively, each within 0.08% of GSSVM; the Kappa indexes reach 0.8213 and 0.7728, each within 0.001 of GSSVM; and the model training time of CSSVM is between 1/6 and 1/10 that of GSSVM. Therefore, CSSVM is a fast and accurate algorithm for hyperspectral image classification and is superior to GSSVM.
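The grid-search cross-validation baseline (GSSVM) follows a generic pattern that can be sketched independently of any SVM library: evaluate every parameter combination by k-fold cross-validation and keep the best. The `fit_score` callback and the toy scoring surface in the test are assumptions for illustration, not the paper's classifier:

```python
import itertools
import statistics

def k_folds(n, k):
    # split indices 0..n-1 into k contiguous folds of near-equal size
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def grid_search(data, param_grid, fit_score, k=5):
    # Exhaustive search: mean k-fold cross-validation score per combination.
    # fit_score(train, test, params) trains on `train` and scores on `test`.
    names = sorted(param_grid)
    best = None
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        folds = k_folds(len(data), k)
        scores = []
        for i, test_idx in enumerate(folds):
            train = [data[j] for f in folds[:i] + folds[i + 1:] for j in f]
            test = [data[j] for j in test_idx]
            scores.append(fit_score(train, test, params))
        mean = statistics.mean(scores)
        if best is None or mean > best[1]:
            best = (params, mean)
    return best
```

The clonal selection algorithm in the abstract replaces this exhaustive outer loop with an immune-inspired stochastic search, which is where its reported 6-10x training-time saving comes from.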
A test for selection employing quantitative trait locus and mutation accumulation data.
Rice, Daniel P; Townsend, Jeffrey P
2012-04-01
Evolutionary biologists attribute much of the phenotypic diversity observed in nature to the action of natural selection. However, for many phenotypic traits, especially quantitative phenotypic traits, it has been challenging to test for the historical action of selection. An important challenge for biologists studying quantitative traits, therefore, is to distinguish between traits that have evolved under the influence of strong selection and those that have evolved neutrally. Most existing tests for selection employ molecular data, but selection also leaves a mark on the genetic architecture underlying a trait. In particular, the distribution of quantitative trait locus (QTL) effect sizes and the distribution of mutational effects together provide information regarding the history of selection. Despite the increasing availability of QTL and mutation accumulation data, such data have not yet been effectively exploited for this purpose. We present a model of the evolution of QTL and employ it to formulate a test for historical selection. To provide a baseline for neutral evolution of the trait, we estimate the distribution of mutational effects from mutation accumulation experiments. We then apply a maximum-likelihood-based method of inference to estimate the range of selection strengths under which such a distribution of mutations could generate the observed QTL. Our test thus represents the first integration of population genetic theory and QTL data to measure the historical influence of selection.
A chain-retrieval model for voluntary task switching.
Vandierendonck, André; Demanet, Jelle; Liefooghe, Baptist; Verbruggen, Frederick
2012-09-01
To account for the findings obtained in voluntary task switching, this article describes and tests the chain-retrieval model. This model postulates that voluntary task selection involves retrieval of task information from long-term memory, which is then used to guide task selection and task execution. The model assumes that the retrieved information consists of acquired sequences (or chains) of tasks, that selection may be biased towards chains containing more task repetitions and that bottom-up triggered repetitions may overrule the intended task. To test this model, four experiments are reported. In Studies 1 and 2, sequences of task choices and the corresponding transition sequences (task repetitions or switches) were analyzed with the help of dependency statistics. The free parameters of the chain-retrieval model were estimated on the observed task sequences and these estimates were used to predict autocorrelations of tasks and transitions. In Studies 3 and 4, sequences of hand choices and their transitions were analyzed similarly. In all studies, the chain-retrieval model yielded better fits and predictions than statistical models of event choice. In applications to voluntary task switching (Studies 1 and 2), all three parameters of the model were needed to account for the data. When no task switching was required (Studies 3 and 4), the chain-retrieval model could account for the data with one or two parameters clamped to a neutral value. Implications for our understanding of voluntary task selection and broader theoretical implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
Lin, Chun-Yuan; Wang, Yen-Ling
2014-01-01
Checkpoint kinase 2 (Chk2) has a great effect on the DNA-damage response and plays an important role in the response to DNA double-strand breaks and related lesions. In this study, we concentrate on Chk2, with the aim of finding potential inhibitors using pharmacophore hypotheses (PhModels), combinatorial fusion, and virtual screening techniques. Applying combinatorial fusion to PhModels and virtual screening is a novel strategy for drug design. We used combinatorial fusion to analyze the prediction results and obtained the best correlation coefficient for the testing set (r test = 0.816) by combining the Best(train)Best(test) and Fast(train)Fast(test) prediction results. Potential inhibitors were selected from the NCI database by screening according to the Best(train)Best(test) + Fast(train)Fast(test) prediction results and by molecular docking with the CDOCKER docking program. The selected compounds show high interaction energies between ligand and receptor. Through these approaches, 23 potential inhibitors of Chk2 were retrieved for further study.
Gregory, Simon; Patterson, Fiona; Baron, Helen; Knight, Alec; Walsh, Kieran; Irish, Bill; Thomas, Sally
2016-10-01
Increasing pressure is being placed on external accountability and cost efficiency in medical education and training internationally. We present an illustrative data analysis of the value-added of postgraduate medical education. We analysed historical selection (entry) and licensure (exit) examination results for trainees sitting the UK Membership of the Royal College of General Practitioners (MRCGP) licensing examination (N = 2291). Selection data comprised: a clinical problem solving test (CPST); a situational judgement test (SJT); and a selection centre (SC). Exit data was an applied knowledge test (AKT) from MRCGP. Ordinary least squares (OLS) regression analyses were used to model differences in attainment in the AKT based on performance at selection (the value-added score). Results were aggregated to the regional level for comparisons. We discovered significant differences in the value-added score between regional training providers. Whilst three training providers confer significant value-added, one training provider was significantly lower than would be predicted based on the attainment of trainees at selection. Value-added analysis in postgraduate medical education potentially offers useful information, although the methodology is complex, controversial, and has significant limitations. Developing models further could offer important insights to support continuous improvement in medical education in future.
Building and testing models with extended Higgs sectors
NASA Astrophysics Data System (ADS)
Ivanov, Igor P.
2017-07-01
Models with non-minimal Higgs sectors represent a mainstream direction in theoretical exploration of physics opportunities beyond the Standard Model. Extended scalar sectors help alleviate difficulties of the Standard Model and lead to a rich spectrum of characteristic collider signatures and astroparticle consequences. In this review, we introduce the reader to the world of extended Higgs sectors. Not pretending to exhaustively cover the entire body of literature, we walk through a selection of the most popular examples: the two- and multi-Higgs-doublet models, as well as singlet and triplet extensions. We will show how one typically builds models with extended Higgs sectors, describe the main goals and the challenges which arise on the way, and mention some methods to overcome them. We will also describe how such models can be tested, what are the key observables one focuses on, and illustrate the general strategy with a subjective selection of results.
ERIC Educational Resources Information Center
Cao, Yi; Lu, Ru; Tao, Wei
2014-01-01
The local item independence assumption underlying traditional item response theory (IRT) models is often not met for tests composed of testlets. There are 3 major approaches to addressing this issue: (a) ignore the violation and use a dichotomous IRT model (e.g., the 2-parameter logistic [2PL] model), (b) combine the interdependent items to form a…
Lobréaux, Stéphane; Melodelima, Christelle
2015-02-01
We tested the use of Generalized Linear Mixed Models to detect associations between genetic loci and environmental variables, taking into account the population structure of sampled individuals. We used a simulation approach to generate datasets under demographically and selectively explicit models. These datasets were used to analyze and optimize GLMM capacity to detect the association between markers and selective coefficients as environmental data in terms of false and true positive rates. Different sampling strategies were tested, maximizing the number of populations sampled, sites sampled per population, or individuals sampled per site, and the effect of different selective intensities on the efficiency of the method was determined. Finally, we apply these models to an Arabidopsis thaliana SNP dataset from different accessions, looking for loci associated with spring minimal temperature. We identified 25 regions that exhibit unusual correlations with the climatic variable and contain genes with functions related to temperature stress. Copyright © 2014 Elsevier Inc. All rights reserved.
Random forest models to predict aqueous solubility.
Palmer, David S; O'Boyle, Noel M; Glen, Robert C; Mitchell, John B O
2007-01-01
Random Forest regression (RF), Partial-Least-Squares (PLS) regression, Support Vector Machines (SVM), and Artificial Neural Networks (ANN) were used to develop QSPR models for the prediction of aqueous solubility, based on experimental data for 988 organic molecules. The Random Forest regression model predicted aqueous solubility more accurately than those created by PLS, SVM, and ANN and offered methods for automatic descriptor selection, an assessment of descriptor importance, and an in-parallel measure of predictive ability, all of which serve to recommend its use. The prediction of log molar solubility for an external test set of 330 molecules that are solid at 25 degrees C gave an r2 = 0.89 and RMSE = 0.69 log S units. For a standard data set selected from the literature, the model performed well with respect to other documented methods. Finally, the diversity of the training and test sets are compared to the chemical space occupied by molecules in the MDL drug data report, on the basis of molecular descriptors selected by the regression analysis.
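The accuracy figures reported for the external test set (r2 = 0.89, RMSE = 0.69 log S units) are the standard regression metrics, sketched here generically (this is not the authors' code):

```python
import math

def rmse(y_true, y_pred):
    # root-mean-square error of predictions
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Both are computed on log molar solubility (log S), so an RMSE of 0.69 corresponds to roughly a factor-of-5 error in linear solubility.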
Aryal, Madhava P; Nagaraja, Tavarekere N; Brown, Stephen L; Lu, Mei; Bagher-Ebadian, Hassan; Ding, Guangliang; Panda, Swayamprava; Keenan, Kelly; Cabral, Glauber; Mikkelsen, Tom; Ewing, James R
2014-10-01
The distribution of dynamic contrast-enhanced MRI (DCE-MRI) parametric estimates in a rat U251 glioma model was analyzed. Using Magnevist as contrast agent (CA), 17 nude rats implanted with U251 cerebral glioma were studied by DCE-MRI twice in a 24 h interval. A data-driven analysis selected one of three models to estimate either (1) plasma volume (vp), (2) vp and forward volume transfer constant (K(trans)) or (3) vp, K(trans) and interstitial volume fraction (ve), constituting Models 1, 2 and 3, respectively. CA distribution volume (VD) was estimated in Model 3 regions by Logan plots. Regions of interest (ROIs) were selected by model. In the Model 3 ROI, descriptors of parameter distributions--mean, median, variance and skewness--were calculated and compared between the two time points for repeatability. All distributions of parametric estimates in Model 3 ROIs were positively skewed. Test-retest differences between population summaries for any parameter were not significant (p ≥ 0.10; Wilcoxon signed-rank and paired t tests). These and similar measures of parametric distribution and test-retest variance from other tumor models can be used to inform the choice of biomarkers that best summarize tumor status and treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.
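The four distribution descriptors compared between the two time points have standard definitions; a minimal sketch follows (the `describe` helper is hypothetical, and population rather than sample moments are assumed):

```python
import statistics

def describe(xs):
    # Mean, median, (population) variance, and moment-coefficient skewness
    # of a sample of parametric estimates.
    n = len(xs)
    m = statistics.fmean(xs)
    var = statistics.pvariance(xs, mu=m)
    sd = var ** 0.5
    skew = sum((x - m) ** 3 for x in xs) / (n * sd ** 3)
    return {"mean": m, "median": statistics.median(xs),
            "variance": var, "skewness": skew}
```

A positive skewness value, as reported for all Model 3 parameter distributions, indicates a long right tail, so the mean exceeds the median.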
Wombacher, Kevin; Dai, Minhao; Matig, Jacob J; Harrington, Nancy Grant
2018-03-22
To identify salient behavioral determinants related to STI testing among college students by testing a model based on the integrative model of behavioral prediction (IMBP). Participants were 265 undergraduate students from a large university in the southeastern US. We used formative and survey research to test an IMBP-based model that explores the relationships between determinants and STI testing intention and behavior. Results of path analyses supported a model in which attitudinal beliefs predicted intention and intention predicted behavior. Normative beliefs and behavioral control beliefs were not significant in the model; however, select individual normative and control beliefs were significantly correlated with intention and behavior. Attitudinal beliefs are the strongest predictor of STI testing intention and behavior. Future efforts to increase STI testing rates should identify and target salient attitudinal beliefs.
NASA Astrophysics Data System (ADS)
Alipour, M. H.; Kibler, Kelly M.
2018-02-01
A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.
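The runoff-efficiency and parameter-residual measures quoted above can be sketched from their standard definitions, assuming NSE is the Nash-Sutcliffe efficiency and a residual is the soft-data centroid minus the calibrated value, as the abstract states:

```python
def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1 minus squared model error
    # relative to the spread of observations about their mean.
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    spread = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / spread

def parameter_residual(soft_centroid, calibrated):
    # Residual as defined in the abstract: centroid of the soft (a priori)
    # parameter estimate minus the calibrated parameter value.
    return soft_centroid - calibrated
```

NSE equals 1 for a perfect simulation and 0 for a model no better than predicting the observed mean, which puts the reported range of 0.67-0.78 in context.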
10 CFR 429.110 - Enforcement testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... DOE has reason to believe that a basic model is not in compliance it may test for enforcement. (2) DOE will select and test units pursuant to paragraphs (c) and (e) of this section. (3) Testing will be... lab is accredited to ISO/IEC 17025:2005(E) and DOE representatives witness the testing. (b) Test...
10 CFR 429.110 - Enforcement testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... DOE has reason to believe that a basic model is not in compliance it may test for enforcement. (2) DOE will select and test units pursuant to paragraphs (c) and (e) of this section. (3) Testing will be... lab is accredited to ISO/IEC 17025:2005(E) and DOE representatives witness the testing. (b) Test...
10 CFR 429.110 - Enforcement testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... DOE has reason to believe that a basic model is not in compliance it may test for enforcement. (2) DOE will select and test units pursuant to paragraphs (c) and (e) of this section. (3) Testing will be... lab is accredited to ISO/IEC 17025:2005(E) and DOE representatives witness the testing. (b) Test...
Design, analysis and test verification of advanced encapsulation systems
NASA Technical Reports Server (NTRS)
Garcia, A., III; Kallis, J. M.; Trucker, D. C.
1983-01-01
Analytical models were developed to perform optical, thermal, electrical and structural analyses on candidate encapsulation systems. From these analyses several candidate encapsulation systems were selected for qualification testing.
Testing and Analytical Modeling for Purging Process of a Cryogenic Line
NASA Technical Reports Server (NTRS)
Hedayat, A.; Mazurkivich, P. V.; Nelson, M. A.; Majumdar, A. K.
2015-01-01
To gain confidence in developing analytical models of the purging process for the cryogenic main propulsion systems of an upper stage, two test series were conducted. The test article, an inclined line 3.35 m long and 20 cm in diameter, was filled with liquid hydrogen (LH2) or gaseous hydrogen (GH2) and then purged with gaseous helium (GHe). A total of 10 tests were conducted. The influences of GHe flow rates and initial temperatures were evaluated. The Generalized Fluid System Simulation Program (GFSSP), an in-house general-purpose fluid system analyzer, was utilized to model and simulate selected tests.
ERIC Educational Resources Information Center
Matton, Nadine; Vautier, Stephane; Raufaste, Eric
2009-01-01
Mean gain scores for cognitive ability tests between two sessions in a selection setting are now a robust finding, yet not fully understood. Many authors do not attribute such gain scores to an increase in the target abilities. Our approach consists of testing a longitudinal SEM model suitable to this view. We propose to model the scores' changes…
Dragovic, Sanja; Vermeulen, Nico P E; Gerets, Helga H; Hewitt, Philip G; Ingelman-Sundberg, Magnus; Park, B Kevin; Juhila, Satu; Snoeys, Jan; Weaver, Richard J
2016-12-01
The current test systems employed by the pharmaceutical industry are poorly predictive for drug-induced liver injury (DILI). The 'MIP-DILI' project addresses this situation through the development of innovative preclinical test systems which are both mechanism-based and of physiological, pharmacological and pathological relevance to DILI in humans. An iterative, tiered approach with respect to test compounds, test systems, bioanalysis and systems analysis is adopted to evaluate existing models and develop new models that can provide validated test systems for the prediction of specific forms of DILI and further elucidation of mechanisms. An essential component of this effort is the choice of a compound training set that will be used to inform refinement and/or development of new model systems that allow prediction based on knowledge of mechanisms, in a tiered fashion. In this review, we focus on the selection of MIP-DILI training compounds for mechanism-based evaluation of non-clinical prediction of DILI. The selected compounds address both hepatocellular and cholestatic DILI patterns in man, covering a broad range of pharmacologies and chemistries, and taking into account available data on potential DILI mechanisms (e.g. mitochondrial injury, reactive metabolites, biliary transport inhibition, and immune responses). Known mechanisms by which these compounds are believed to cause liver injury are described; many if not all drugs in this review appear to exhibit multiple toxicological mechanisms. Thus, the training compound selection offers a valuable tool to profile DILI mechanisms and to interrogate existing and novel in vitro systems for the prediction of human DILI.
Balancing Selection and Its Effects on Sequences in Nearby Genome Regions
Charlesworth, Deborah
2006-01-01
Our understanding of balancing selection is currently becoming greatly clarified by new sequence data being gathered from genes in which polymorphisms are known to be maintained by selection. The data can be interpreted in conjunction with results from population genetics models that include recombination between selected sites and nearby neutral marker variants. This understanding is making possible tests for balancing selection using molecular evolutionary approaches. Such tests do not necessarily require knowledge of the functional types of the different alleles at a locus, but such information, as well as information about the geographic distribution of alleles and markers near the genes, can potentially help towards understanding what form of balancing selection is acting, and how long alleles have been maintained. PMID:16683038
Billiard, Sylvain; Castric, Vincent; Vekemans, Xavier
2007-03-01
We developed a general model of sporophytic self-incompatibility under negative frequency-dependent selection allowing complex patterns of dominance among alleles. We used this model deterministically to investigate the effects on equilibrium allelic frequencies of the number of dominance classes, the number of alleles per dominance class, the asymmetry in dominance expression between pollen and pistil, and whether selection acts on male fitness only or both on male and on female fitnesses. We show that the so-called "recessive effect" occurs under a wide variety of situations. We found emerging properties of finite population models with several alleles per dominance class such as that higher numbers of alleles are maintained in more dominant classes and that the number of dominance classes can evolve. We also investigated the occurrence of homozygous genotypes and found that substantial proportions of those can occur for the most recessive alleles. We used the model for two species with complex dominance patterns to test whether allelic frequencies in natural populations are in agreement with the distribution predicted by our model. We suggest that the model can be used to test explicitly for additional, allele-specific, selective forces.
Shi, Xiaohu; Zhang, Jingfen; He, Zhiquan; Shang, Yi; Xu, Dong
2011-09-01
One of the major challenges in protein tertiary structure prediction is structure quality assessment. In many cases, protein structure prediction tools generate good structural models, but fail to select the best models from a huge number of candidates as the final output. In this study, we developed a sampling-based machine-learning method to rank protein structural models by integrating multiple scores and features. First, features such as predicted secondary structure, solvent accessibility and residue-residue contact information are integrated by two Radial Basis Function (RBF) models trained from different datasets. Then, the two RBF scores and five selected scoring functions developed by others, i.e., Opus-CA, Opus-PSP, DFIRE, RAPDF, and Cheng Score are synthesized by a sampling method. At last, another integrated RBF model ranks the structural models according to the features of sampling distribution. We tested the proposed method by using two different datasets, including the CASP server prediction models of all CASP8 targets and a set of models generated by our in-house software MUFOLD. The test result shows that our method outperforms any individual scoring function on both best model selection, and overall correlation between the predicted ranking and the actual ranking of structural quality.
James D. Wickham; Robert V. O' Neill; Kurt H. Riitters; Timothy G. Wade; K. Bruce Jones
1997-01-01
Calculation of landscape metrics from land-cover data is becoming increasingly common. Some studies have shown that these measurements are sensitive to differences in land-cover composition, but none are known to have also tested their sensitivity to land-cover misclassification. An error simulation model was written to test the sensitivity of selected landscape...
An Actuarial Model for Selecting Participants for a Special Medical Education Program.
ERIC Educational Resources Information Center
Walker-Bartnick, Leslie; And Others
An actuarial model applied to the selection process of a special medical school program at the University of Maryland School of Medicine was tested. The 77 students in the study sample were admitted to the university's Fifth Pathway Program, which is designed for U.S. citizens who completed their medical school training, except for internship and…
Testing and selection of cosmological models with (1+z)^6 corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szydlowski, Marek; Marc Kac Complex Systems Research Centre, Jagiellonian University, ul. Reymonta 4, 30-059 Cracow; Godlowski, Wlodzimierz
2008-02-15
In the paper we check whether the contribution of a (-)(1+z)^6-type term in the Friedmann equation can be tested. We consider some astronomical tests to constrain the density parameters in such models. We describe different interpretations of such an additional term: geometric effects of loop quantum cosmology, effects of braneworld cosmological models, nonstandard cosmological models in metric-affine gravity, and models with spinning fluid. Kinematical (or geometrical) tests based on null geodesics are insufficient to separate individual matter components when they behave like perfect fluid and scale in the same way. Still, it is possible to measure their overall effect. We use recent measurements of the coordinate distances from the Fanaroff-Riley type IIb radio galaxy data, supernovae type Ia data, the baryon oscillation peak, and cosmic microwave background radiation observations to obtain stronger bounds for the contribution of the type considered. We demonstrate that, while ρ² corrections are very small, they can be tested by astronomical observations, at least in principle. Bayesian criteria of model selection (the Bayes factor, AIC, and BIC) are used to check whether the additional parameters are detectable in the present epoch. As it turns out, the ΛCDM model is favored over the bouncing model driven by loop quantum effects. In other words, the bounds obtained from cosmography are very weak, and from the point of view of the present data this model is indistinguishable from the ΛCDM one.
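The information criteria used in the comparison above reward fit but penalize extra parameters. A minimal sketch of how such a comparison works; the log-likelihoods and sample size below are invented for illustration, not taken from the paper:

```python
import math

def aic(log_likelihood, k):
    # Akaike information criterion: 2k - 2 ln(L); lower is better
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    # Bayesian information criterion: the penalty grows with sample size n
    return k * math.log(n) - 2 * log_likelihood

# Illustrative (made-up) fits: a 2-parameter LCDM-like model vs. a
# 3-parameter bouncing model whose extra (1+z)^6 term barely improves the fit.
n_data = 192
aic_lcdm, bic_lcdm = aic(-120.0, 2), bic(-120.0, 2, n_data)
aic_bounce, bic_bounce = aic(-119.5, 3), bic(-119.5, 3, n_data)
# The small likelihood gain does not justify the extra parameter,
# so both criteria favor the simpler model here.
```

This mirrors the paper's qualitative conclusion: when an added parameter yields only a marginal likelihood improvement, AIC and especially BIC prefer the simpler model.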
Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph
2017-01-01
In most sports, the development of elite athletes is a long-term process of talent identification and support. Typically, talent selection systems administer a multi-faceted strategy including national coach observations and varying physical and psychological tests when deciding who is chosen for talent development. The aim of this exploratory study was to evaluate the prognostic validity of talent selections by varying groups 10 years after they had been conducted. This study used a unique, multi-phased approach. Phase 1 involved players (n = 68) in 2001 completing a battery of general and sport-specific tests of handball ‘talent’ and performance. In Phase 2, national and regional coaches (n = 7) in 2001 who attended training camps identified the most talented players. In Phase 3, current novice and advanced handball players (n = 12 in each group) selected the most talented from short videos of matches played during the talent camp. Analyses compared predictions among all groups with a best model-fit derived from the motor tests. Results revealed little difference between regional and national coaches in the prediction of future performance and little difference in forecasting performance between novices and players. The best model-fit regression by the motor-tests outperformed all predictions. While several limitations are discussed, this study is a useful starting point for future investigations considering athlete selection decisions in talent identification in sport. PMID:28744238
Fouad, Marwa A; Tolba, Enas H; El-Shal, Manal A; El Kerdawy, Ahmed M
2018-05-11
The continuous emergence of new β-lactam antibiotics creates a need for analytical methods that accelerate and facilitate their analysis. A face-centered central composite experimental design was adopted, using different levels of phosphate buffer pH and acetonitrile percentage at zero time and after 15 min in a gradient program, to obtain the optimum chromatographic conditions for the elution of 31 β-lactam antibiotics. Retention factors were used as the target property to build two QSRR models, utilizing conventional forward selection and the advanced nature-inspired firefly algorithm for descriptor selection, each coupled with multiple linear regression. The obtained models showed high performance in both internal and external validation, indicating their robustness and predictive ability. The Williams-Hotelling test and Student's t-test showed no statistically significant difference between the models' results. Y-randomization validation showed that the models reflect a significant correlation between the selected molecular descriptors and the analytes' chromatographic retention. These results indicate that the generated FS-MLR and FFA-MLR models are of comparable quality at both the training and validation levels. They also gave comparable information about the molecular features that influence the retention behavior of β-lactams under the current chromatographic conditions. We conclude that, in some cases, a simple conventional feature-selection algorithm can generate models as robust and predictive as those generated using advanced ones.
Coupling Spatiotemporal Community Assembly Processes to Changes in Microbial Metabolism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Emily B.; Crump, Alex R.; Resch, Charles T.
Community assembly processes govern shifts in species abundances in response to environmental change, yet our understanding of assembly remains largely decoupled from ecosystem function. Here, we test hypotheses regarding assembly and function across space and time using hyporheic microbial communities as a model system. We pair sampling of two habitat types through hydrologic fluctuation with null modeling and multivariate statistics. We demonstrate that dual selective pressures assimilate to generate compositional changes at distinct timescales among habitat types, resulting in contrasting associations of Betaproteobacteria and Thaumarchaeota with selection and with seasonal changes in aerobic metabolism. Our results culminate in a conceptual model in which selection from contrasting environments regulates taxon abundance and ecosystem function through time, with increases in function when oscillating selection opposes stable selective pressures. Our model is applicable within both macrobial and microbial ecology and presents an avenue for assimilating community assembly processes into predictions of ecosystem function.
A Path to an Instructional Science: Data-Generated vs. Postulated Models
ERIC Educational Resources Information Center
Gropper, George L.
2016-01-01
Psychological testing can serve as a prototype on which to base a data-generated approach to instructional design. In "testing batteries" tests are used to predict achievement. In the proposed approach batteries of prescriptions would be used to produce achievement. In creating "test batteries" tests are selected for their…
Ice Accretions and Icing Effects for Modern Airfoils
NASA Technical Reports Server (NTRS)
Addy, Harold E., Jr.
2000-01-01
Icing tests were conducted to document ice shapes formed on three different two-dimensional airfoils and to study the effects of the accreted ice on aerodynamic performance. The models tested were representative of airfoil designs in current use for each of the commercial transport, business jet, and general aviation categories of aircraft. The models were subjected to a range of icing conditions in an icing wind tunnel. The conditions were selected primarily from the Federal Aviation Administration's Federal Aviation Regulations 25 Appendix C atmospheric icing conditions. A few large droplet icing conditions were included. To verify the aerodynamic performance measurements, molds were made of selected ice shapes formed in the icing tunnel. Castings of the ice were made from the molds and placed on a model in a dry, low-turbulence wind tunnel where precision aerodynamic performance measurements were made. Documentation of all the ice shapes and the aerodynamic performance measurements made during the icing tunnel tests is included in this report. Results from the dry, low-turbulence wind tunnel tests are also presented.
How motivation affects academic performance: a structural equation modelling analysis.
Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G
2013-03-01
Few studies in medical education have examined the effect of quality of motivation on performance. Self-Determination Theory, based on quality of motivation, differentiates between Autonomous Motivation (AM), which originates within an individual, and Controlled Motivation (CM), which originates from external sources. The aims were to determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort, and to compare this model between subgroups: males and females, and students selected via two different systems, namely qualitative and weighted-lottery selection. Data on motivation, study strategy, and effort were collected from 383 medical students of VU University Medical Center Amsterdam, and their academic performance results were obtained from the student administration. Structural Equation Modelling was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit the data well (chi-square = 1.095, df = 3, p = 0.778, RMSEA = 0.000), and it also fitted well for all tested subgroups of students. As expected, the strength of the relationships between the variables differed across subgroups. In conclusion, RAM positively correlated with academic performance through a deep study strategy and higher study effort. This model seems valid in medical education in subgroups such as males, females, and students selected by qualitative and weighted-lottery selection.
Uniting statistical and individual-based approaches for animal movement modelling.
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shapes animals' spatial behaviours and gives rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, because field access to internal states is practically impossible and current statistical models cannot produce dynamic outputs. To address these issues, we developed a robust method that combines statistical and individual-based modelling (IBM). Using a statistical technique for forward modelling of the IBM is faster to parameterize than a pure inverse-modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled from generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM was validated using both k-fold cross-validation and emergent-pattern validation, and was tested under two scenarios with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method captured the complexity of the natural system and adequately projected possible future states of the system under different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.
Selecting Meteorological Input for the Global Modeling Initiative Assessments
NASA Technical Reports Server (NTRS)
Strahan, Susan; Douglass, Anne; Prather, Michael; Coy, Larry; Hall, Tim; Rasch, Phil; Sparling, Lynn
1999-01-01
The Global Modeling Initiative (GMI) science team has developed a three dimensional chemistry and transport model (CTM) to evaluate the impact of the exhaust of supersonic aircraft on the stratosphere. An important goal of the GMI is to test modules for numerical transport, photochemical integration, and model dynamics within a common framework. This work is focussed on the dependence of the overall assessment on the wind and temperature fields used by the CTM. Three meteorological data sets for the stratosphere were available to GMI: the National Center for Atmospheric Research Community Climate Model (CCM2), the Goddard Earth Observing System Data Assimilation System (GEOS-DAS), and the Goddard Institute for Space Studies general circulation model (GISS-2'). Objective criteria were established by the GMI team to evaluate which of these three data sets provided the best representation of trace gases in the stratosphere today. Tracer experiments were devised to test various aspects of model transport. Stratospheric measurements of long-lived trace gases were selected as a test of the CTM transport. This presentation describes the criteria used in grading the meteorological fields and the resulting choice of wind fields to be used in the GMI assessment. This type of objective model evaluation will lead to a higher level of confidence in these assessments. We suggest that the diagnostic tests shown here be used to augment traditional general circulation model evaluation methods.
A comparative study of turbulence models in predicting hypersonic inlet flows
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh
1993-01-01
A computational study has been conducted to evaluate the performance of various turbulence models. The NASA P8 inlet, which represents cruise condition of a typical hypersonic air-breathing vehicle, was selected as a test case for the study; the PARC2D code, which solves the full two dimensional Reynolds-averaged Navier-Stokes equations, was used. Results are presented for a total of six versions of zero- and two-equation turbulence models. Zero-equation models tested are the Baldwin-Lomax model, the Thomas model, and a combination of the two. Two-equation models tested are low-Reynolds number models (the Chien model and the Speziale model) and a high-Reynolds number model (the Launder and Spalding model).
40 CFR 86.079-31 - Separate certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-Duty Engines, and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied... certification of part of his product line. The selection of test vehicles (or test engines) and the computation...
40 CFR 86.079-31 - Separate certification.
Code of Federal Regulations, 2011 CFR
2011-07-01
...-Duty Engines, and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied... certification of part of his product line. The selection of test vehicles (or test engines) and the computation...
Informing Selection of Nanomaterial Concentrations for ...
Little justification is generally provided for the selection of in vitro assay testing concentrations for engineered nanomaterials (ENMs). Selection of concentration levels for hazard evaluation based on real-world exposure scenarios is desirable. We reviewed published ENM concentrations measured in air in manufacturing and R&D labs to identify input levels for estimating ENM mass retained in the human lung using the Multiple-Path Particle Dosimetry (MPPD) model. Model input parameters were individually varied to estimate alveolar mass retained for different particle sizes (5-1000 nm), aerosol concentrations (0.1 and 1 mg/m3), aspect ratios (2, 4, 10, 167), and exposure durations (24 hours and a working lifetime). The calculated lung surface concentrations were then converted to in vitro solution concentrations. Modeled alveolar mass retained after 24 hours is most affected by activity level and aerosol concentration. Alveolar retention for Ag and TiO2 nanoparticles and CNTs over a working-lifetime (45 years) exposure duration is similar to the high-end concentrations (~ 30-400 μg/mL) typical of in vitro testing reported in the literature. The analyses performed are generally applicable for deriving ENM testing concentrations for in vitro hazard screening studies, though further research is needed to improve the approach. Understanding the relationship between potential real-world exposures and in vitro test concentrations will facilitate interpretation of toxicological results.
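The retained-mass estimates described above come from the MPPD model; the back-of-envelope calculation below only illustrates the underlying concentration x inhaled-volume x deposition-fraction arithmetic, with all parameter names and values invented for illustration:

```python
def alveolar_mass_retained(aerosol_mg_m3, minute_vent_l, hours,
                           dep_fraction, clearance_fraction=0.0):
    """Back-of-envelope deposited alveolar mass in mg: airborne
    concentration times inhaled air volume times alveolar deposition
    fraction; clearance is ignored by default. Illustrative only; a
    dosimetry model such as MPPD accounts for particle size, aspect
    ratio, airway geometry, and activity level."""
    inhaled_m3 = minute_vent_l * 60.0 * hours / 1000.0  # L -> m^3
    return aerosol_mg_m3 * inhaled_m3 * dep_fraction * (1.0 - clearance_fraction)

# 1 mg/m^3 aerosol, 20 L/min ventilation, 24 h, 10% alveolar deposition:
mass_mg = alveolar_mass_retained(1.0, 20.0, 24.0, 0.1)
```

Note how the result scales linearly in aerosol concentration and duration, which matches the abstract's finding that 24-hour retention is driven mainly by activity level (ventilation) and aerosol concentration.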
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems worsened with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and the different sums-of-squares partitioned by MRM, and by the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
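The sample-size inflation identified above can be made concrete: MRM treats each pairwise distance as an observation, so the n entering AICc's correction and BIC's penalty is far larger than the number of sampled units. A minimal sketch, using the standard formulas with illustrative numbers:

```python
import math

def n_pairwise(n):
    # An MRM regression on distance matrices has n*(n-1)/2 "observations",
    # not n: this is the inflated sample size the study identifies
    return n * (n - 1) // 2

def aicc(aic_value, k, n):
    # Small-sample corrected AIC; the correction term vanishes as n grows,
    # so an inflated n makes AICc behave like plain AIC
    return aic_value + (2.0 * k * (k + 1)) / (n - k - 1)

def bic_penalty(k, n):
    # BIC's complexity penalty scales with ln(n) of the nominal sample size
    return k * math.log(n)

# 50 sampled populations yield 1225 pairwise distances, so the criteria
# behave as if the dataset were roughly 24x larger than it really is:
n_units, n_obs = 50, n_pairwise(50)
```

The sketch shows only the bookkeeping, not the simulation study; the paper's point is that with this inflated n (and MRM's sum-of-squares partitioning) all three criteria systematically over-select complex models.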
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of the Nonlinear AutoRegressive (GNAR) model, which converts the model-order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure how both the newly introduced and the previously included variables improve the model's fit, and these are used to decide which model variables to retain or eliminate. The optimal model is then obtained through a goodness-of-fit measure or significance test. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable, and applicable to practical engineering.
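A minimal sketch of the greedy forward-selection idea underlying such stepwise procedures, using residual-sum-of-squares improvement as the retention criterion (the paper's own procedure uses the partial autocorrelation function and significance tests, which are not reproduced here; all data below are synthetic):

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection: repeatedly add the candidate term that
    most reduces the residual sum of squares (RSS); stop when the best
    relative improvement falls below min_gain."""
    selected = []
    rss = float(y @ y)  # RSS of the empty (all-zero) model
    while len(selected) < len(names):
        best_name, best_rss = None, rss
        for j, name in enumerate(names):
            if name in selected:
                continue
            cols = [X[:, names.index(s)] for s in selected] + [X[:, j]]
            A = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ beta
            if float(r @ r) < best_rss:
                best_name, best_rss = name, float(r @ r)
        if best_name is None or (rss - best_rss) < min_gain * rss:
            break  # no candidate improves the fit enough
        selected.append(best_name)
        rss = best_rss
    return selected

# y depends on x and x^2; x^3 is a redundant candidate term
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.01, 200)
terms = forward_stepwise(np.column_stack([x, x**2, x**3]), y, ["x", "x^2", "x^3"])
```

The linear term is picked up first because it explains the most variance, matching the paper's choice of seeding the model with linear terms before introducing nonlinear ones.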
Variable selection for marginal longitudinal generalized linear models.
Cantoni, Eva; Flemming, Joanna Mills; Ronchetti, Elvezio
2005-06-01
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach while emphasizing some important robust features inherent to GC(p).
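For reference, the classical Mallows's C(p) that GC(p) generalizes can be computed as below (the GEE-based GC(p) itself is not reproduced here; the numbers are illustrative):

```python
def mallows_cp(rss_p, p, sigma2_full, n):
    """Mallows's C_p for a candidate model with p parameters, where
    sigma2_full is the error-variance estimate from the largest model
    considered. Models whose C_p is close to p are judged adequate
    for prediction."""
    return rss_p / sigma2_full - n + 2 * p

# A model that fits as well as its parameter count warrants has C_p ~ p:
cp = mallows_cp(rss_p=17.0, p=3, sigma2_full=1.0, n=20)
```

An overfitted or underfitted candidate would push C_p well away from p, which is what makes the statistic usable as a selection criterion.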
Reliability of High-Voltage Tantalum Capacitors (Parts 3 and 4)
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
The Weibull grading test is a powerful technique for the selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, of up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model treating failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. The model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that the parameters of the model and the acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
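The general log-linear life-stress relationship mentioned above can be sketched as follows. The stress transforms (ln V and 1/T) and all coefficient values below are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def characteristic_life(voltage, temp_k, b0, b_v, b_t):
    """General log-linear life-stress model: ln(eta) is linear in the
    chosen stress transforms, here ln(V) and 1/T (an assumed,
    Eyring-like choice). Returns the Weibull characteristic life."""
    return math.exp(b0 + b_v * math.log(voltage) + b_t / temp_k)

def acceleration_factor(use_cond, test_cond, b0, b_v, b_t):
    # Ratio of characteristic life in use vs. at accelerated (HALT)
    # conditions: how much faster failures accumulate under stress
    return (characteristic_life(*use_cond, b0, b_v, b_t)
            / characteristic_life(*test_cond, b0, b_v, b_t))

# Derated use at 10 V / 328 K vs. HALT at 16 V / 358 K
# (b_v < 0: life shortens with voltage; b_t > 0: life shortens with temperature)
af = acceleration_factor((10.0, 328.0), (16.0, 358.0), b0=5.0, b_v=-8.0, b_t=5000.0)
```

With two stress levels per variable, as the abstract notes, the coefficients b_v and b_t can be solved from the measured characteristic lives, which is what makes the two-level HALT design sufficient.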
Kothe, Christian; Hissbach, Johanna; Hampe, Wolfgang
2013-01-01
Introduction: The present study examines the question whether the selection of dental students should be based solely on average school-leaving grades (GPA) or whether it could be improved by using a subject-specific aptitude test. Methods: The HAM-Nat Natural Sciences Test was piloted with freshmen during their first study week in 2006 and 2007. In 2009 and 2010 it was used in the dental student selection process. The sample size in the regression models varies between 32 and 55 students. Results: Used as a supplement to the German GPA, the HAM-Nat test explained up to 12% of the variance in preclinical examination performance. We confirmed the prognostic validity of GPA reported in earlier studies in some, but not all of the individual preclinical examination results. Conclusion: The HAM-Nat test is a reliable selection tool for dental students. Use of the HAM-Nat yielded a significant improvement in prediction of preclinical academic success in dentistry. PMID:24282449
Hissbach, Johanna; Feddersen, Lena; Sehner, Susanne; Hampe, Wolfgang
2012-01-01
Aims: Tests with natural-scientific content are predictive of the success in the first semesters of medical studies. Some universities in the German speaking countries use the ‘Test for medical studies’ (TMS) for student selection. One of its test modules, namely “medical and scientific comprehension”, measures the ability for deductive reasoning. In contrast, the Hamburg Assessment Test for Medicine, Natural Sciences (HAM-Nat) evaluates knowledge in natural sciences. In this study the predictive power of the HAM-Nat test will be compared to that of the NatDenk test, which is similar to the TMS module “medical and scientific comprehension” in content and structure. Methods: 162 medical school beginners volunteered to complete either the HAM-Nat (N=77) or the NatDenk test (N=85) in 2007. Until spring 2011, 84.2% of these successfully completed the first part of the medical state examination in Hamburg. Via different logistic regression models we tested the predictive power of high school grade point average (GPA or “Abiturnote”) and the test results (HAM-Nat and NatDenk) with regard to the study success criterion “first part of the medical state examination passed successfully up to the end of the 7th semester” (Success7Sem). The Odds Ratios (OR) for study success are reported. Results: For both test groups a significant correlation existed between test results and study success (HAM-Nat: OR=2.07; NatDenk: OR=2.58). If both admission criteria are estimated in one model, the main effects (GPA: OR=2.45; test: OR=2.32) and their interaction effect (OR=1.80) are significant in the HAM-Nat test group, whereas in the NatDenk test group only the test result (OR=2.21) significantly contributes to the variance explained. Conclusions: On their own both HAM-Nat and NatDenk have predictive power for study success, but only the HAM-Nat explains additional variance if combined with GPA. 
Under the current circumstances of medical school selection (many strong applicants and only a limited number of available places), selection based on the HAM-Nat and GPA has the highest predictive power of all the models tested. PMID:23255967
Currently, little justification is provided for nanomaterial testing concentrations in in vitro assays. The in vitro concentrations typically used may be higher than those experienced by exposed humans. Selection of concentration levels for hazard evaluation based on real-world e...
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
Goodwin, Laura; Fairclough, Stephen H; Poole, Helen M
2013-06-01
Kolk et al.'s model of symptom perception underlines the effects of trait negative affect, selective attention and external stressors. The current study tested this model in 263 males and 498 females from an occupational sample. Trait negative affect was associated with symptom reporting in females only, and selective attention and psychological job demands were associated with symptom reporting in both genders. Health anxiety was associated with symptom reporting in males only. Future studies might consider the inclusion of selective attention, which was more strongly associated with symptom reporting than negative affect. Psychological job demands appear to influence symptom reporting in both males and females.
A study for development of aerothermodynamic test model materials and fabrication technique
NASA Technical Reports Server (NTRS)
Dean, W. G.; Connor, L. E.
1972-01-01
A literature survey, materials reformulation and tailoring, fabrication problems, and materials selection and evaluation for fabricating models to be used with the phase-change technique for obtaining quantitative aerodynamic heat-transfer data are presented. The study resulted in the selection of the two best materials: Stycast 2762 FT and an alumina ceramic. Characteristics of these materials and detailed fabrication methods are presented.
Identifying and Modeling Dynamic Preference Evolution in Multipurpose Water Resources Systems
NASA Astrophysics Data System (ADS)
Mason, E.; Giuliani, M.; Castelletti, A.; Amigoni, F.
2018-04-01
Multipurpose water systems are usually operated on a tradeoff of conflicting operating objectives. Under steady-state climatic and socioeconomic conditions, such a tradeoff is supposed to represent a fair and/or efficient preference. Extreme variability in the external forcing, however, might affect the water operator's risk aversion and force a change in his or her preference. Properly accounting for these shifts is key to any rigorous retrospective assessment of the operator's behavior, and to building descriptive models for projecting the future evolution of the system. In this study, we explore how the selection of different preferences is linked to variations in the external forcing. We argue that preference selection evolves according to recent, extreme variations in system performance: underperformance on one of the objectives pushes the preference toward the harmed objective. To test this assumption, we developed a rational procedure to simulate the operator's preference selection. We map this selection onto a multilateral negotiation, in which multiple virtual agents independently optimize different objectives and periodically negotiate a compromise policy for the operation of the system. The agents' attitudes at each negotiation step are determined by recent system performance, measured by the specific objective each agent maximizes. We then propose a numerical model of preference dynamics that implements a concept from cognitive psychology, the availability bias. We test our modeling framework on a synthetic lake operated for flood control and water supply. Results show that our model successfully captures the operator's preference selection and its dynamic evolution driven by extreme wet and dry situations.
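A toy version of such a recency-driven preference update might look as below; the linear form, the names, and the parameter values are illustrative assumptions, not the authors' model:

```python
def update_preference(weights, deficits, alpha=0.5):
    """One step of a toy preference-dynamics rule: each objective's
    weight grows with its recent performance deficit (mimicking the
    availability bias, where recently harmed objectives loom larger),
    then the weights are renormalized to sum to one."""
    raw = [w + alpha * d for w, d in zip(weights, deficits)]
    total = sum(raw)
    return [r / total for r in raw]

# A dry spell harms water supply (objective 0) but not flood control
# (objective 1), so the preference shifts toward water supply:
w = update_preference([0.5, 0.5], [0.4, 0.0])
```

Iterating such an update during extreme wet or dry spells shifts the compromise policy toward the recently harmed objective, which is the qualitative behavior the study reports.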
Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
Purpose: The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials: Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Results: Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by the Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values.
Conclusions: Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT. PMID:24586971
Lee, Tsair-Fwu; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan
2014-01-01
The aim of this study was to develop a multivariate logistic regression model with least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used as the endpoint evaluation. The primary endpoint (grade 3(+) xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R(2), chi-squared test, Omnibus, Hosmer-Lemeshow test, and the AUC. Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by Hosmer-Lemeshow test and AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R(2) was satisfactory and corresponded well with the expected values. 
Multivariate NTCP models with LASSO can be used to predict patient-rated xerostomia after IMRT.
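The LASSO-based variable selection described above can be illustrated with a minimal sketch. The data here are synthetic stand-ins, the penalty strength is illustrative, and nothing below reproduces the study's actual factors or coefficients; the sketch only shows how an L1-penalized logistic regression selects prognostic factors by shrinking uninformative coefficients to exactly zero.

```python
# Sketch of LASSO-based prognostic-factor selection for a binary
# endpoint (e.g. grade 3+ xerostomia). Data are synthetic; feature
# indices and the penalty strength C are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
# Outcome driven by the first two covariates only.
logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# The L1 penalty drives uninformative coefficients to exactly zero,
# performing variable selection and model fitting in one step.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
selected = np.flatnonzero(model.coef_[0])
print("selected covariate indices:", selected)
```

In practice the penalty strength would be tuned, e.g. by the bootstrapping procedure the study describes, rather than fixed a priori.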
Numerical Simulation of Selecting Model Scale of Cable in Wind Tunnel Test
NASA Astrophysics Data System (ADS)
Huang, Yifeng; Yang, Jixin
The numerical simulation method based on Computational Fluid Dynamics (CFD) provides a possible alternative to physical wind tunnel tests. First, the correctness of the numerical simulation method is validated against a benchmark example. To select the minimum cable length for a given diameter in numerical wind tunnel tests, CFD-based numerical wind tunnel tests are carried out on cables with several different length-to-diameter ratios (L/D). The results show that when L/D reaches 18, the drag coefficient is essentially stable.
Roff, Derek A; Mostowy, Serge; Fairbairn, Daphne J
2002-01-01
The concept of phenotypic trade-offs is a central element in evolutionary theory. In general, phenotypic models assume a fixed trade-off function, whereas quantitative genetic theory predicts that the trade-off function will change as a result of selection. For a linear trade-off function, selection will readily change the intercept but must be relatively stronger to change the slope. We test these predictions by examining the trade-off between fecundity and flight capability, as measured by dorso-longitudinal muscle mass, in four different populations of the sand cricket, Gryllus firmus. Three populations were recently derived from the wild, and the fourth had been in the laboratory for 19 years. We hypothesized that the laboratory population had most likely undergone more, and different, selection than the three wild populations and therefore should differ from them with respect to both slope and intercept. Because of geographic variation in selection, we predicted a general difference in intercept among the four populations. We further tested the hypothesis that this intercept is correlated with the proportion macropterous and that this relationship itself varies with environmental conditions experienced during both the nymphal and adult periods. Observed variation in the phenotypic trade-off was consistent with the predictions of the quantitative genetic model. These results point to the importance of modeling trade-offs as dynamic rather than static relationships. We discuss how phenotypic models can incorporate such variation. The phenotypic trade-off between fecundity and dorso-longitudinal muscle mass is determined in part by variation in body size, illustrating the necessity of considering trade-offs to be multifactorial rather than simply bivariate relationships.
40 CFR 86.1905 - How does this program work?
Code of Federal Regulations, 2011 CFR
2011-07-01
... least in part on the Phase 1 or Phase 2 testing outcomes described in § 86.1915. (2) The engine family... this section. We may select an engine family from the current model year or any previous model year... months longer to complete Phase 2 testing if there is a reasonable basis for needing more time. In very...
Modeling Fear of Crime in Dallas Neighborhoods: A Test of Social Capital Theory
ERIC Educational Resources Information Center
Ferguson, Kristin M.; Mindel, Charles H.
2007-01-01
This study tested a model of the effects of different predictors on individuals' levels of fear of crime in Dallas neighborhoods. Given its dual focus on individual perceptions and community-level interactions, social capital theory was selected as the most appropriate framework to explore fear of crime within the neighborhood milieu. A structural…
Detecting directional selection in the presence of recent admixture in African-Americans.
Lohmueller, Kirk E; Bustamante, Carlos D; Clark, Andrew G
2011-03-01
We investigate the performance of tests of neutrality in admixed populations using plausible demographic models for African-American history as well as resequencing data from African and African-American populations. The analysis of both simulated and human resequencing data suggests that recent admixture does not result in an excess of false-positive results for neutrality tests based on the frequency spectrum after accounting for the population growth in the parental African population. Furthermore, when simulating positive selection, Tajima's D, Fu and Li's D, and haplotype homozygosity have lower power to detect population-specific selection using individuals sampled from the admixed population than from the nonadmixed population. Fay and Wu's H test, however, has more power to detect selection using individuals from the admixed population than from the nonadmixed population, especially when the selective sweep ended long ago. Our results have implications for interpreting recent genome-wide scans for positive selection in human populations. © 2011 by the Genetics Society of America
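The frequency-spectrum neutrality tests mentioned above contrast different estimators of the population mutation rate. As a minimal sketch, Tajima's D can be computed from summary statistics alone: n sampled sequences, S segregating sites, and pi (the mean number of pairwise differences). The constants follow Tajima's (1989) standard formulation; the inputs below are illustrative, not from the study's data.

```python
# Sketch of Tajima's D from summary statistics. Under neutrality the
# pairwise estimator (pi) and Watterson's estimator (S/a1) agree in
# expectation, so D ~ 0; an excess of rare variants gives negative D.
import math

def tajimas_d(n, S, pi):
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i**2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1 = c1 / a1
    e2 = c2 / (a1**2 + a2)
    # D contrasts pi with Watterson's estimator S/a1, normalized by
    # an estimate of the standard deviation of the difference.
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))

n, S = 10, 16
a1 = sum(1.0 / i for i in range(1, n))
print(tajimas_d(n, S, S / a1))  # exactly 0 when pi equals S/a1
```

Demographic events such as growth or admixture shift the frequency spectrum and hence D, which is why the abstract stresses accounting for growth in the parental population before interpreting the test.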
Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.
Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu
2016-08-01
This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Testes Mass, but Not Sperm Length, Increases with Higher Levels of Polyandry in an Ancient Sex Model
Vrech, David E.; Olivero, Paola A.; Mattoni, Camilo I.; Peretti, Alfredo V.
2014-01-01
There is strong evidence that polyandrous taxa have evolved relatively larger testes than monogamous relatives. Sperm size may either increase or decrease across species with the risk or intensity of sperm competition. Scorpions represent an ancient direct mode with spermatophore-mediated sperm transfer and are particularly well suited for studies of sperm competition. This work analyzes for the first time the variables affecting testes mass, ejaculate volume and sperm length, according to the species' levels of polyandry, in species belonging to the Neotropical family Bothriuridae. Variables influencing testes mass and sperm length were identified by model selection analysis using the corrected Akaike Information Criterion. Testes mass varied greatly among the seven species analyzed, ranging from 1.6±1.1 mg in Timogenes dorbignyi to 16.3±4.5 mg in Brachistosternus pentheri, with an average of 8.4±5.0 mg across all species. The relationship between testes mass and body mass was not significant. Body allocation to testes mass, taken as the Gonadosomatic Index, was high in Bothriurus cordubensis and Brachistosternus ferrugineus and low in Timogenes species. The best-fitting model for testes mass considered only polyandry as a predictor, with a positive influence. Model selection showed that body mass influenced sperm length negatively, but after correcting for body mass, none of the variables analyzed explained sperm length. Both body mass and testes mass positively influenced spermatophore volume. There was a strong phylogenetic effect on the model containing testes mass. As predicted by sperm competition theory, and in line with other arthropods, testes mass increased in species with higher levels of sperm competition and positively influenced spermatophore volume, but the data were not conclusive for sperm length. PMID:24736525
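Model selection by corrected AIC (AICc), as used above to pick predictors of testes mass, can be sketched as follows. The data, candidate models, and effect sizes are synthetic assumptions purely for illustration; only the AICc machinery itself is standard.

```python
# Sketch of candidate-model comparison by corrected AIC (AICc).
# Data and candidate predictors are synthetic, not the study's.
import numpy as np

def aicc(rss, n, k):
    # Gaussian log-likelihood up to a constant; k counts parameters
    # including the residual variance. The correction term penalizes
    # extra parameters more strongly at small sample sizes.
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

rng = np.random.default_rng(1)
n = 40
polyandry = rng.normal(size=n)
body_mass = rng.normal(size=n)
testes = 2.0 * polyandry + rng.normal(scale=0.5, size=n)

def fit_rss(X, y):
    # Ordinary least squares via lstsq; returns residual sum of squares.
    X1 = np.column_stack([np.ones(n)] + X)
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return float(resid @ resid)

candidates = {
    "intercept only": ([], 2),
    "polyandry": ([polyandry], 3),
    "polyandry + body mass": ([polyandry, body_mass], 4),
}
scores = {name: aicc(fit_rss(X, testes), n, k)
          for name, (X, k) in candidates.items()}
best = min(scores, key=scores.get)
print("best model by AICc:", best)
```

In the comparative setting of the study, the same scoring would be applied to phylogenetically corrected models rather than to ordinary least squares fits.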
Pharmacophore Modelling and Synthesis of Quinoline-3-Carbohydrazide as Antioxidants
El Bakkali, Mustapha; Ismaili, Lhassane; Tomassoli, Isabelle; Nicod, Laurence; Pudlo, Marc; Refouvelet, Bernard
2011-01-01
Starting from well-known antioxidant agents, we developed a first pharmacophore model containing four common chemical features: one aromatic ring and three hydrogen bond acceptors. This model served as a template in virtual screening of the Maybridge and NCI databases, which resulted in the selection of sixteen compounds. The selected compounds showed good antioxidant activity measured by three chemical tests: scavenging of the DPPH, hydroxyl, and superoxide radicals. New synthetic compounds with a good correlation with the model were prepared, and some of them presented good antioxidant activity. PMID:25954520
Possibilities of rock constitutive modelling and simulations
NASA Astrophysics Data System (ADS)
Baranowski, Paweł; Małachowski, Jerzy
2018-01-01
The paper deals with the problem of rock finite element modelling and simulation. The main intention of the authors was to present the possibilities of different approaches to rock constitutive modelling. For this purpose granite was selected, because its mechanical properties are well characterized and widely reported in the literature. Two significantly different constitutive material models were implemented to simulate granite fracture in various configurations: the Johnson-Holmquist ceramic model, which is very often used to predict the behavior of rock and other brittle materials, and a simple linear elastic model with brittle failure, which can be used to simulate glass fracturing. Four cases with different loading conditions were chosen to compare the aforementioned constitutive models: a uniaxial compression test, a notched three-point-bending test, a copper ball impacting a block, and a small-scale blasting test.
Improving Conceptual Understanding and Representation Skills Through Excel-Based Modeling
NASA Astrophysics Data System (ADS)
Malone, Kathy L.; Schunn, Christian D.; Schuchardt, Anita M.
2018-02-01
The National Research Council framework for science education and the Next Generation Science Standards have created a need for additional research and development of curricula that are both technologically model-based and include engineering practices. This is especially the case for biology education. This paper describes a quasi-experimental design study to test the effectiveness of a model-based curriculum focused on the concepts of natural selection and population ecology that makes use of Excel modeling tools (Modeling Instruction in Biology with Excel, MBI-E). The curriculum revolves around the bio-engineering practice of controlling an invasive species. The study takes place in the Midwest within ten high schools teaching a regular-level introductory biology class. A post-test was designed that targeted a number of common misconceptions in both concept areas as well as representational usage. The post-test results demonstrate that the MBI-E students significantly outperformed the traditional classes on both natural selection and population ecology concepts, thus overcoming a number of misconceptions. In addition, the MBI-E students made use of multiple representations more often and demonstrated greater fascination with science.
NASA Astrophysics Data System (ADS)
Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya
2012-05-01
Extreme share returns in Malaysia are studied. The monthly, quarterly, half-yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) tests are performed to test for stationarity, while the Mann-Kendall (MK) test checks for the presence of a monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameters, while L-moments estimates (LMOM) are used to initialize the MLE optimization routine for the stationary model. A likelihood ratio test is performed to determine the best model. Sherman's goodness-of-fit test is used to assess the quality of convergence of the monthly, quarterly, half-yearly and yearly maxima to the GEV distribution. Return levels are then estimated for prediction and planning purposes. The results show that the maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of a trend, so non-stationary models are fitted as well. Model 2, in which the location parameter increases with time, is the best for all selection intervals. Sherman's goodness-of-fit test shows that the monthly, quarterly, half-yearly and yearly maxima converge to the GEV distribution. From the results, it seems reasonable to conclude that the yearly maxima are better suited for convergence to the GEV distribution, especially if longer records are available. The return level estimate, i.e. the return amount that is expected to be exceeded, on average, once every T time periods, falls within the confidence interval at T = 50 for the quarterly, half-yearly and yearly maxima.
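The stationary part of the workflow above, fitting a GEV distribution to block maxima by MLE and reading off a T-period return level, can be sketched with scipy. The data are simulated from a known GEV, not the Malaysian share returns, and the parameter values are arbitrary assumptions.

```python
# Sketch of fitting the GEV distribution to block maxima by MLE and
# estimating a T-period return level. Simulated data; note scipy's
# shape convention c = -xi relative to the usual GEV shape xi.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)
true_c, true_loc, true_scale = -0.1, 5.0, 2.0
maxima = genextreme.rvs(true_c, loc=true_loc, scale=true_scale,
                        size=200, random_state=rng)

# Maximum likelihood fit of shape, location, and scale.
c_hat, loc_hat, scale_hat = genextreme.fit(maxima)

# The T-period return level is the (1 - 1/T) quantile of the fitted GEV.
T = 50
return_level = genextreme.ppf(1 - 1.0 / T, c_hat, loc=loc_hat, scale=scale_hat)
print(f"{T}-period return level: {return_level:.2f}")
```

The non-stationary Model 2 of the study, with a time-varying location parameter, would instead require maximizing a custom likelihood in which loc is a linear function of time.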
Fang, Xingang; Bagui, Sikha; Bagui, Subhash
2017-08-01
The readily available high-throughput screening (HTS) data from the PubChem database provide an opportunity for mining small molecules in a variety of biological systems using machine learning techniques. Given the thousands of molecular descriptors developed to encode useful chemical information about the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. Developing a systematic descriptor selection strategy requires an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross-validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK5) target suggested a feasible descriptor/model selection strategy for similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
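The screening scenario above, a classifier trained on confirmatory-assay data and then used to eliminate inactives from a much larger primary screen, can be sketched on synthetic data. The descriptors, class balance, and effect sizes below are invented stand-ins, not the study's Signature-descriptor dataset; the point is the metric bookkeeping (precision, sensitivity, and the fraction of inactives eliminated).

```python
# Sketch of screening an imbalanced HTS-style dataset with logistic
# regression. Synthetic features stand in for molecular descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, p = 5000, 20
X = rng.normal(size=(n, p))
# Rare actives, driven by a few informative descriptors.
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] - 6.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                          random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = clf.predict(X_te)

tp = int(((pred == 1) & (y_te == 1)).sum())
fp = int(((pred == 1) & (y_te == 0)).sum())
fn = int(((pred == 0) & (y_te == 1)).sum())
tn = int(((pred == 0) & (y_te == 0)).sum())
print("precision:", tp / (tp + fp))
print("sensitivity:", tp / (tp + fn))
print("inactives eliminated:", tn / (tn + fp))
```

With a rare active class, specificity ("inactives eliminated") can be very high even when sensitivity is modest, the same trade-off the abstract reports.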
10 CFR 431.325 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...
FBST for Cointegration Problems
NASA Astrophysics Data System (ADS)
Diniz, M.; Pereira, C. A. B.; Stern, J. M.
2008-11-01
To estimate causal relations, time series econometrics must guard against spurious correlation, a problem first noted by Yule [21]. To address it, one can work with differenced series or use multivariate models such as VAR or VEC models, in which case the analysed series present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature on inference for VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection in multivariate (VAR/VEC) time series models and shows how to implement it using both data sets available in the literature and simulated data sets. A standard non-informative prior is assumed.
Selecting the Final Model — Joinpoint Help System 4.4.0.0
Why doesn't the joinpoint program give me the best possible fit? I can see other models with more joinpoints that would fit better. Exactly how does the program decide which tests to perform and which joinpoint model is the final model?
ERIC Educational Resources Information Center
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
Fuller, Rebecca C
2009-07-01
The sensory bias model for the evolution of mating preferences states that mating preferences evolve as correlated responses to selection on nonmating behaviors sharing a common sensory system. The critical assumption is that pleiotropy creates genetic correlations that affect the response to selection. I simulated selection on populations of neural networks to test this. First, I selected for various combinations of foraging and mating preferences. Sensory bias predicts that populations with preferences for like-colored objects (red food and red mates) should evolve more readily than preferences for differently colored objects (red food and blue mates). Here, I found no evidence for sensory bias. The responses to selection on foraging and mating preferences were independent of one another. Second, I selected on foraging preferences alone and asked whether there were correlated responses for increased mating preferences for like-colored mates. Here, I found modest evidence for sensory bias. Selection for a particular foraging preference resulted in increased mating preference for similarly colored mates. However, the correlated responses were small and inconsistent. Selection on foraging preferences alone may affect initial levels of mating preferences, but these correlations did not constrain the joint evolution of foraging and mating preferences in these simulations.
Odegård, J; Klemetsdal, G; Heringstad, B
2005-04-01
Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.
Test Cases for Flutter of the Benchmark Models Rectangular Wings on the Pitch and Plunge Apparatus
NASA Technical Reports Server (NTRS)
Bennett, Robert M.
2000-01-01
The supercritical airfoil was chosen as a relatively modern airfoil for comparison. The B0012 model was tested first. Three different types of flutter instability boundaries were encountered: a classical flutter boundary, a transonic stall flutter boundary at angle of attack, and a plunge instability near M = 0.9 at zero angle of attack. This test was made in air and was Transonic Dynamics Tunnel (TDT) Test 468. The BSCW model (for Benchmark SuperCritical Wing) was tested next as TDT Test 470, using both air and a heavy gas, R-12, as test mediums. The effect of a transition strip on flutter was evaluated in air. The B64A010 model was subsequently tested as TDT Test 493. Some further analysis of the experimental data for the B0012 wing is presented. Transonic calculations using the parameters for the B0012 wing in a two-dimensional typical-section flutter analysis are given. These data are supplemented with data from the Benchmark Active Controls Technology (BACT) model, given in the next chapter of this document. The BACT model was of the same planform and airfoil as the B0012 model, but with spoilers and a trailing-edge control. It was tested in the heavy gas R-12 and was instrumented mostly at the 60 percent span. The flutter data obtained on PAPA and the static aerodynamic test cases from BACT serve as additional data for the B0012 model. All three types of flutter are included in the BACT test cases. In this report several test cases are selected to illustrate trends for a variety of different conditions, with emphasis on transonic flutter. Cases are selected for classical and stall flutter for the BSCW model, for classical flutter and the plunge instability for the B64A010 model, and for classical flutter for the B0012 model. Test cases are also presented for BSCW at static angles of attack.
Only the mean pressures and the real and imaginary parts of the first harmonic of the pressures are included in the data for the test cases, but digitized time histories have been archived. The data for the test cases are available as separate electronic files. An overview of the model and tests is given, the standard formulary for these data is listed, and some sample results are presented.
Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.
Moal, Iain H; Bates, Paul A
2012-01-01
The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry near that of the bound complex prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting.
Steele, Katie; Werndl, Charlotte
2018-06-01
This article argues that common intuitions regarding (a) the specialness of 'use-novel' data for confirmation and (b) that this specialness implies the 'no-double-counting rule', which says that data used in 'constructing' (calibrating) a model cannot also play a role in confirming the model's predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
Fine-scale habitat modeling of a top marine predator: do prey data improve predictive capacity?
Torres, Leigh G; Read, Andrew J; Halpin, Patrick
2008-10-01
Predators and prey assort themselves relative to each other, the availability of resources and refuges, and the temporal and spatial scale of their interaction. Predictive models of predator distributions often rely on these relationships by incorporating data on environmental variability and prey availability to determine predator habitat selection patterns. This approach to predictive modeling holds true in marine systems where observations of predators are logistically difficult, emphasizing the need for accurate models. In this paper, we ask whether including prey distribution data in fine-scale predictive models of bottlenose dolphin (Tursiops truncatus) habitat selection in Florida Bay, Florida, U.S.A., improves predictive capacity. Environmental characteristics are often used as predictor variables in habitat models of top marine predators with the assumption that they act as proxies of prey distribution. We examine the validity of this assumption by comparing the response of dolphin distribution and fish catch rates to the same environmental variables. Next, the predictive capacities of four models, with and without prey distribution data, are tested to determine whether dolphin habitat selection can be predicted without recourse to describing the distribution of their prey. The final analysis determines the accuracy of predictive maps of dolphin distribution produced by modeling areas of high fish catch based on significant environmental characteristics. We use spatial analysis and independent data sets to train and test the models. Our results indicate that, due to high habitat heterogeneity and the spatial variability of prey patches, fine-scale models of dolphin habitat selection in coastal habitats will be more successful if environmental variables are used as predictor variables of predator distributions rather than relying on prey data as explanatory variables. 
However, predictive modeling of prey distribution as the response variable based on environmental variability did produce high predictive performance of dolphin habitat selection, particularly foraging habitat.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neary, Vincent Sinclair; Yang, Zhaoqing; Wang, Taiping
A wave model test bed is established to benchmark, test, and evaluate spectral wave models and modeling methodologies (i.e., best practices) for predicting the wave energy resource parameters recommended by the International Electrotechnical Commission, IEC TS 62600-101 Ed. 1.0 ©2015. Among other benefits, the model test bed can be used to investigate the suitability of different models, specifically which source terms should be included in spectral wave models under different wave climate conditions and for different classes of resource assessment. The overarching goal is to use these investigations to provide industry guidance for model selection and modeling best practices depending on the wave site conditions and the desired class of resource assessment. Modeling best practices are reviewed, and limitations and knowledge gaps in predicting wave energy resource parameters are identified.
Modal Survey of ETM-3, A 5-Segment Derivative of the Space Shuttle Solid Rocket Booster
NASA Technical Reports Server (NTRS)
Nielsen, D.; Townsend, J.; Kappus, K.; Driskill, T.; Torres, I.; Parks, R.
2005-01-01
The complex interactions between internal motor generated pressure oscillations and motor structural vibration modes associated with the static test configuration of a Reusable Solid Rocket Motor have potential to generate significant dynamic thrust loads in the 5-segment configuration (Engineering Test Motor 3). Finite element model load predictions for worst-case conditions were generated based on extrapolation of a previously correlated 4-segment motor model. A modal survey was performed on the largest rocket motor to date, Engineering Test Motor #3 (ETM-3), to provide data for finite element model correlation and validation of model generated design loads. The modal survey preparation included pretest analyses to determine an efficient analysis set selection using the Effective Independence Method and test simulations to assure critical test stand component loads did not exceed design limits. Historical Reusable Solid Rocket Motor modal testing, ETM-3 test analysis model development and pre-test loads analyses, as well as test execution, and a comparison of results to pre-test predictions are discussed.
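The Effective Independence Method mentioned above can be illustrated with a short sketch: starting from a candidate set of finite-element DOF, the method iteratively drops the DOF contributing least to the linear independence of the target mode shapes. The mode-shape matrix below is random and purely illustrative, not an ETM-3 model.

```python
import numpy as np

# Sketch of the Effective Independence (EfI) method for selecting an
# efficient measurement (analysis) set. The mode-shape matrix is random,
# standing in for FEM target modes.
rng = np.random.default_rng(7)
phi = rng.normal(0, 1, (30, 5))        # 30 candidate DOF, 5 target modes
keep = list(range(30))

while len(keep) > 10:                  # retain 10 sensor locations
    A = phi[keep]
    # leverage of each DOF: diagonal of A (A^T A)^-1 A^T
    ed = np.einsum('ij,jk,ik->i', A, np.linalg.inv(A.T @ A), A)
    del keep[int(np.argmin(ed))]       # drop the least independent DOF

A = phi[keep]
cond = np.linalg.cond(A.T @ A)         # observability of the target modes
```

Retaining DOF with the highest leverage keeps the Fisher information matrix of the target modes well conditioned, which is the point of the pretest analysis-set selection.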
Dynamic test/analysis correlation using reduced analytical models
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Angelucci, A. Filippo; Javeed, Mehzad
1992-01-01
Test/analysis correlation is an important aspect of the verification of analysis models which are used to predict on-orbit response characteristics of large space structures. This paper presents results of a study using reduced analysis models for performing dynamic test/analysis correlation. The reduced test-analysis model (TAM) has the same number and orientation of DOF as the test measurements. Two reduction methods, static (Guyan) reduction and the Improved Reduced System (IRS) reduction, are applied to the test/analysis correlation of a laboratory truss structure. Simulated test results and modal test data are used to examine the performance of each method. It is shown that the selection of DOF to be retained in the TAM is critical when large structural masses are involved. In addition, the use of modal test results may introduce inaccuracies into the TAM even if a large number of DOF are retained.
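A minimal sketch of the static (Guyan) reduction discussed above, on an invented 4-DOF spring-mass chain: the retained (test-measured) DOF are condensed through the transformation T = [I; -Koo^-1 Koa], which reproduces the static Schur complement of the stiffness matrix.

```python
import numpy as np

# Static (Guyan) reduction of a fixed-free 4-DOF spring-mass chain
# (unit springs and masses, invented for illustration).
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])
M = np.eye(4)
a = [0, 3]                          # retained (test-measured) DOF
o = [1, 2]                          # omitted DOF

Kaa, Kao = K[np.ix_(a, a)], K[np.ix_(a, o)]
Koo, Koa = K[np.ix_(o, o)], K[np.ix_(o, a)]
T = np.vstack([np.eye(len(a)), -np.linalg.solve(Koo, Koa)])  # Guyan transform
K_red = T.T @ K[np.ix_(a + o, a + o)] @ T
M_red = T.T @ M[np.ix_(a + o, a + o)] @ T
# sanity: Guyan reduction equals the static Schur complement
schur = Kaa - Kao @ np.linalg.inv(Koo) @ Koa
```

Because the omitted-DOF inertia is carried into M_red only approximately, Guyan-reduced frequencies are exact statically but approximate dynamically, which is why IRS-type corrections exist.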
Bagdas, Deniz; Targowska-Duda, Katarzyna M.; López, Jhon J.; Perez, Edwin G.; Arias, Hugo R.; Damaj, M. Imad
2016-01-01
BACKGROUND Positive allosteric modulators (PAMs) facilitate endogenous neurotransmission and/or enhance the efficacy of agonists without directly acting on the orthosteric binding sites. In this regard, selective α7 nicotinic acetylcholine receptor type II PAMs display antinociceptive activity in rodent chronic inflammatory and neuropathic pain models. This study investigates whether 3-furan-2-yl-N-p-tolyl-acrylamide (PAM-2), a new putative α7-selective type II PAM, attenuates experimental inflammatory and neuropathic pain in mice. METHODS We tested the activity of PAM-2 after intraperitoneal administration in 3 pain assays: carrageenan-induced inflammatory pain, complete Freund adjuvant-induced inflammatory pain, and chronic constriction injury-induced neuropathic pain in mice. We also tested whether PAM-2 enhanced the effects of intrathecally administered choline, a selective α7 agonist, in the mouse carrageenan test. Because the experience of pain has both sensory and affective dimensions, we also evaluated the effects of PAM-2 on acetic acid-induced aversion by using the conditioned place aversion test. RESULTS We observed that systemic administration of PAM-2 significantly reversed mechanical allodynia and thermal hyperalgesia in inflammatory and neuropathic pain models in a dose- and time-dependent manner without motor impairment. In addition, by attenuating paw edema in the inflammatory models, PAM-2 showed anti-inflammatory properties. The antinociceptive effect of PAM-2 was inhibited by the selective competitive antagonist methyllycaconitine, indicating that the effect is mediated by α7 nicotinic acetylcholine receptors. Furthermore, PAM-2 enhanced the antiallodynic and anti-inflammatory effects of choline in the mouse carrageenan test. PAM-2 was also effective in reducing acetic acid-induced aversion in the conditioned place aversion assay.
CONCLUSIONS These findings suggest that the administration of PAM-2, a new α7-selective type II PAM, reduces both the sensory and the affective behaviors associated with neuropathic and inflammatory pain in the mouse. Thus, this drug may have therapeutic applications in the treatment and management of chronic pain. PMID:26280585
Using the Animal Model to Accelerate Response to Selection in a Self-Pollinating Crop
Cowling, Wallace A.; Stefanova, Katia T.; Beeck, Cameron P.; Nelson, Matthew N.; Hargreaves, Bonnie L. W.; Sass, Olaf; Gilmour, Arthur R.; Siddique, Kadambot H. M.
2015-01-01
We used the animal model in S0 (F1) recurrent selection in a self-pollinating crop including, for the first time, phenotypic and relationship records from self progeny, in addition to cross progeny, in the pedigree. We tested the model in Pisum sativum, the autogamous annual species used by Mendel to demonstrate the particulate nature of inheritance. Resistance to ascochyta blight (Didymella pinodes complex) in segregating S0 cross progeny was assessed by best linear unbiased prediction over two cycles of selection. Genotypic concurrence across cycles was provided by pure-line ancestors. From cycle 1, 102/959 S0 plants were selected, and their S1 self progeny were intercrossed and selfed to produce 430 S0 and 575 S2 individuals that were evaluated in cycle 2. The analysis was improved by including all genetic relationships (with crossing and selfing in the pedigree), additive and nonadditive genetic covariances between cycles, fixed effects (cycles and spatial linear trends), and other random effects. Narrow-sense heritability for ascochyta blight resistance was 0.305 and 0.352 in cycles 1 and 2, respectively, calculated from variance components in the full model. The fitted correlation of predicted breeding values across cycles was 0.82. Average accuracy of predicted breeding values was 0.851 for S2 progeny of S1 parent plants and 0.805 for S0 progeny tested in cycle 2, and 0.878 for S1 parent plants for which no records were available. The forecasted response to selection was 11.2% in the next cycle with 20% S0 selection proportion. This is the first application of the animal model to cyclic selection in heterozygous populations of selfing plants. The method can be used in genomic selection, and for traits measured on S0-derived bulks such as grain yield. PMID:25943522
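For intuition about a forecasted response to selection under a given selected proportion, the classical breeder's equation R = i h² σp can be evaluated; this is a textbook approximation, not the BLUP-based forecast used in the paper, and the heritability and phenotypic variance below are illustrative.

```python
import math

# Back-of-envelope breeder's equation R = i * h^2 * sigma_p, with the
# truncation-selection intensity i for a 20% selected proportion.
# Parameter values are illustrative, not taken from the paper.
def selection_intensity(p):
    """i = z/p for upper-tail truncation on a standard normal trait."""
    lo, hi = -10.0, 10.0
    for _ in range(200):                # bisect for the truncation point x
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > p:   # upper-tail prob
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    z = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # normal density at x
    return z / p

h2, sigma_p = 0.33, 1.0                 # hypothetical values
R = selection_intensity(0.20) * h2 * sigma_p
```

For a 20% selected proportion the intensity is about 1.40 standard deviations, so per-cycle gains scale directly with heritability and the accuracy of the predicted breeding values.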
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach.
Cavagnaro, Daniel R; Gonzalez, Richard; Myung, Jay I; Pitt, Mark A
2013-02-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models.
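A toy sketch of the idea behind adaptive stimulus selection: among candidate gambles, the diagnostic ones are those on which competing models predict different choices. The utility forms and stimuli below are invented for illustration and are far simpler than the mutual-information machinery of ADO.

```python
# Toy illustration of design optimization for model discrimination:
# find the gamble-vs-sure-amount stimuli on which two candidate models
# disagree. Utility forms and stimuli are invented for illustration.

def eu_value(p, x):
    """Expected Utility with linear utility."""
    return p * x

def pt_value(p, x, gamma=0.6):
    """Prospect-Theory-style value with a nonlinear probability weight."""
    w = p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return w * x

# Candidate stimuli: a risky gamble (p, x) versus a sure amount s.
stimuli = [((0.1, 100), 10), ((0.5, 100), 50), ((0.9, 100), 85)]

def disagreement(stim):
    (p, x), s = stim
    eu_choice = eu_value(p, x) > s      # does EU prefer the gamble?
    pt_choice = pt_value(p, x) > s      # does the PT-style model prefer it?
    return eu_choice != pt_choice       # diagnostic if the models disagree

diagnostic = [s for s in stimuli if disagreement(s)]
```

Here the small- and large-probability gambles are diagnostic (probability weighting bends choices away from EU at the extremes), while the 50/50 gamble is not; ADO formalizes this by maximizing the expected information of each trial.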
An infrastructure to mine molecular descriptors for ligand selection on virtual screening.
Seus, Vinicius Rosa; Perazzo, Giovanni Xavier; Winck, Ana T; Werhli, Adriano V; Machado, Karina S
2014-01-01
The evaluation of receptor-ligand interactions is an important step in rational drug design. The databases that provide the structures of ligands are growing on a daily basis, which makes it impossible to test all the ligands against a target receptor. Hence, a ligand selection step is needed before testing. One possible approach is to evaluate a set of molecular descriptors. With the aim of describing the characteristics of promising compounds for a specific receptor, we introduce a data warehouse-based infrastructure to mine molecular descriptors for virtual screening (VS). We performed experiments that consider the receptor HIV-1 protease as the target and different compounds for this protein. A set of 9 molecular descriptors are taken as the predictive attributes, and the free energy of binding (FEB) is taken as the target attribute. By applying the J48 algorithm over the data, we obtain decision tree models that achieve up to 84% accuracy. The models indicate which molecular descriptors, and which of their values, are relevant to good FEB results. Using their rules, we performed ligand selection on the ZINC database. Our results show a substantial reduction in the number of ligands selected for VS experiments; for instance, the best selection model picked only 0.21% of the total amount of drug-like ligands.
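As a stand-in for the J48 decision tree used in the paper, the sketch below induces a single descriptor threshold (a one-level decision stump) separating compounds with good and poor free energy of binding; the descriptor, its values, and the labels are all synthetic.

```python
import numpy as np

# One-level decision stump over a single molecular descriptor, a minimal
# stand-in for the J48 tree. Descriptor values and FEB labels are synthetic.
rng = np.random.default_rng(0)
logp = rng.normal(2.0, 1.0, 200)          # hypothetical descriptor (logP-like)
good_feb = (logp < 2.0).astype(int)       # synthetic "good binding" label
good_feb ^= (rng.random(200) < 0.1)       # 10% label noise

def best_threshold(x, y):
    """Pick the split on x that best separates the two classes."""
    best_t, best_acc = None, 0.0
    for t in np.unique(x):
        pred = (x < t).astype(int)
        acc = max((pred == y).mean(), (pred != y).mean())  # either branch polarity
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, acc = best_threshold(logp, good_feb)
```

A full J48/C4.5 tree extends this by choosing splits with an information-gain criterion and recursing, which is how the descriptor-value rules cited in the abstract arise.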
Measuring the Sensitivity of Single-locus “Neutrality Tests” Using a Direct Perturbation Approach
Garrigan, Daniel; Lewontin, Richard; Wakeley, John
2010-01-01
A large number of statistical tests have been proposed to detect natural selection based on a sample of variation at a single genetic locus. These tests measure the deviation of the allelic frequency distribution observed within populations from the distribution expected under a set of assumptions that includes both neutral evolution and equilibrium population demography. The present study considers a new way to assess the statistical properties of these tests of selection, by their behavior in response to direct perturbations of the steady-state allelic frequency distribution, unconstrained by any particular nonequilibrium demographic scenario. Results from Monte Carlo computer simulations indicate that most tests of selection are more sensitive to perturbations of the allele frequency distribution that increase the variance in allele frequencies than to perturbations that decrease the variance. Simulations also demonstrate that it requires, on average, 4N generations (N is the diploid effective population size) for tests of selection to relax to their theoretical, steady-state distributions following different perturbations of the allele frequency distribution to its extremes. This relatively long relaxation time highlights the fact that these tests are not robust to violations of the other assumptions of the null model besides neutrality. Lastly, genetic variation arising under an example of a regularly cycling demographic scenario is simulated. Tests of selection performed on this last set of simulated data confirm the confounding nature of these tests for the inference of natural selection, under a demographic scenario that likely holds for many species. The utility of using empirical, genomic distributions of test statistics, instead of the theoretical steady-state distribution, is discussed as an alternative for improving the statistical inference of natural selection. PMID:19744997
Selection against Heteroplasmy Explains the Evolution of Uniparental Inheritance of Mitochondria
Christie, Joshua R.; Schaerf, Timothy M.; Beekman, Madeleine
2015-01-01
Why are mitochondria almost always inherited from one parent during sexual reproduction? Current explanations for this evolutionary mystery include conflict avoidance between the nuclear and mitochondrial genomes, clearing of deleterious mutations, and optimization of mitochondrial-nuclear coadaptation. Mathematical models, however, fail to show that uniparental inheritance can replace biparental inheritance under any existing hypothesis. Recent empirical evidence indicates that mixing two different but normal mitochondrial haplotypes within a cell (heteroplasmy) can cause cell and organism dysfunction. Using a mathematical model, we test if selection against heteroplasmy can lead to the evolution of uniparental inheritance. When we assume selection against heteroplasmy and mutations are neither advantageous nor deleterious (neutral mutations), uniparental inheritance replaces biparental inheritance for all tested parameter values. When heteroplasmy involves mutations that are advantageous or deleterious (non-neutral mutations), uniparental inheritance can still replace biparental inheritance. We show that uniparental inheritance can evolve with or without pre-existing mating types. Finally, we show that selection against heteroplasmy can explain why some organisms deviate from strict uniparental inheritance. Thus, we suggest that selection against heteroplasmy explains the evolution of uniparental inheritance. PMID:25880558
NASA Technical Reports Server (NTRS)
Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc
2014-01-01
Within the framework of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee the optical cleanliness level while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument result in a total stroke of 16.5 mm in this linear slit changer arrangement. The combination of long stroke and high-precision positioning requirements has been identified as the main design challenge to be validated through breadboard model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including a detailed tolerance analysis, has shown the suitability of this satellite roller screw-based mechanism for the actuation of the tested flexible guiding system and compliant connection.
The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE instrument are considered fully successful, such that future tests covering the full Slit Change Mechanism can be performed directly on a Qualification Model with the confidence gained. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high-precision positioning are dominant design drivers.
Response to Selection in Finite Locus Models with Nonadditive Effects.
Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jørn Rind; Bijma, Piter; Sørensen, Anders Christian
2017-05-01
Under the finite-locus model in the absence of mutation, the additive genetic variance is expected to decrease when directional selection acts on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive variance can be sustained or even increased when nonadditive genetic effects are present. We tested the hypothesis that finite-locus models with both additive and nonadditive genetic effects maintain more additive genetic variance (VA) and realize larger medium- to long-term genetic gains than models with only additive effects when the trait under selection is subject to truncation selection. Four genetic models that included additive, dominance, and additive-by-additive epistatic effects were simulated. The simulated genome for individuals consisted of 25 chromosomes, each with a length of 1 M. One hundred bi-allelic QTL, 4 on each chromosome, were considered. In each generation, 100 sires and 100 dams were mated, producing 5 progeny per mating. The population was selected for a single trait (h2 = 0.1) for 100 discrete generations with selection on phenotype or BLUP-EBV. VA decreased with directional truncation selection even in the presence of nonadditive genetic effects. Nonadditive effects influenced the long-term response to selection, and among the genetic models, additive gene action had the highest response to selection. In addition, in all genetic models, BLUP-EBV resulted in a greater fixation of favorable and unfavorable alleles and a higher response than phenotypic selection. In conclusion, for the schemes we simulated, the presence of nonadditive genetic effects had little effect on changes in additive variance, and VA decreased under directional selection. © The American Genetic Association 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
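A toy finite-locus simulation reproduces the qualitative result that additive variance declines under truncation selection; the settings below (population size, locus count, effects, selected proportion, generations) are illustrative and purely additive, not the paper's simulation design.

```python
import numpy as np

# Toy finite-locus simulation of truncation selection with purely
# additive effects; all settings are illustrative.
rng = np.random.default_rng(1)
n, loci, keep, noise_sd = 500, 100, 0.2, 3.0     # noise_sd gives h2 ~ 0.1

geno = rng.integers(0, 2, (n, 2, loci))          # diploid 0/1 alleles
effects = rng.normal(0, 1, loci)                 # additive effect per locus

def additive_variance(geno):
    p = geno.mean(axis=(0, 1))                   # allele frequencies
    return float((2 * p * (1 - p) * effects ** 2).sum())

va_start = additive_variance(geno)
for _ in range(50):
    bv = (geno.sum(axis=1) * effects).sum(axis=1)       # breeding values
    pheno = bv + rng.normal(0, noise_sd * bv.std(), n)  # environmental noise
    parents = geno[np.argsort(pheno)[-int(keep * n):]]  # truncation selection
    sires = parents[rng.integers(0, len(parents), n)]
    dams = parents[rng.integers(0, len(parents), n)]
    gam1 = np.take_along_axis(sires, rng.integers(0, 2, (n, 1, loci)), axis=1)
    gam2 = np.take_along_axis(dams, rng.integers(0, 2, (n, 1, loci)), axis=1)
    geno = np.concatenate([gam1, gam2], axis=1)         # free recombination
va_end = additive_variance(geno)
```

Selection and drift together push allele frequencies toward fixation, so the 2p(1-p)-weighted variance shrinks over generations, the baseline against which the paper tests its nonadditive models.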
Trends in the quality of water in New Jersey streams, water years 1971–2011
Hickman, R. Edward; Hirsch, Robert M.
2017-02-27
In a study conducted by the U.S. Geological Survey in cooperation with the New Jersey Department of Environmental Protection and the Delaware River Basin Commission, trend tests were conducted on selected water-quality characteristics measured at stations on streams in New Jersey during selected periods over water years 1971‒2011. Tests were conducted on 3 nutrients (total nitrogen, filtered nitrate plus nitrite, and total phosphorus) at 28 water-quality stations. At 4 of these stations, tests were also conducted on 3 measures of major ions (specific conductance, filtered chloride, and total dissolved solids). Two methods were used to identify trends: Weighted Regressions on Time, Discharge, and Season (WRTDS) models and seasonal rank-sum tests. For this report, the use of WRTDS models included the use of the WRTDS Bootstrap Test (WBT). WRTDS models identified trends in flow-normalized annual concentrations and flow-normalized annual fluxes over water years 1980‒2011 and 2000‒11 for each nutrient, filtered chloride, and total dissolved solids. WRTDS models were developed for each nutrient at the 20 or 21 stations at which streamflow was measured or estimated. Trends in nutrient concentration were reported for these stations; trends in nutrient fluxes were reported for only 15–17 of these stations. The results of WRTDS models for water years 1980‒2011 identified more stations with downward trends in concentrations of either total nitrogen or total phosphorus than upward trends. For total nitrogen, there were downward trends at 9 stations and an upward trend at 1 station. For total phosphorus, there were downward trends at 8 stations and an upward trend at 1 station. For filtered nitrate plus nitrite, there were downward trends at 6 stations and upward trends at 6 stations.
The result of the trend test in flux for a selected nutrient at a selected station (downward trend, no trend, or upward trend) usually matched the trend result in concentration. Seasonal rank-sum tests, the second method used, identified step trends in water quality measured in different decades: the 1970s, 1980s, 1990s, and 2000s. Tests were conducted on all nutrients at 28 stations and on all measures of major ions at the 4 selected stations. Results of seasonal rank-sum tests between the 1980s and the 2000s identified more stations with downward trends in concentrations of total nitrogen (14) than stations with upward trends (2) and more stations with downward trends in concentrations of total phosphorus (18) than stations with upward trends (1). A combined dataset of trend results for concentrations over water years 1980‒2011 was created from the results of the two tests for the period. Results of WRTDS models were included in this combined dataset, if available. Otherwise, the results of the seasonal rank-sum tests between water-quality characteristics measured in the 1980s and 2000s were included. Trend results over water years 1980‒2011 in the combined dataset show that few of the 28 stations had upward trends in concentrations of either total nitrogen or total phosphorus. There were only 2 stations with upward trends in total nitrogen concentration and 1 station with an upward trend in total phosphorus concentration. Results for filtered nitrate plus nitrite show about the same number of stations with upward trends (9) as stations with downward trends (7). Results for all measures of major ions show upward trends at the four stations tested.
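The seasonal rank-sum step-trend idea can be sketched as follows: compare concentrations between two periods within each season using a Wilcoxon rank-sum z-statistic, then combine the seasonal statistics. The data below are synthetic, and the USGS procedure handles details (ties, censored values) omitted here.

```python
import numpy as np

# Sketch of a seasonal rank-sum step-trend test on synthetic monthly data.
rng = np.random.default_rng(2)

def rank_sum_z(a, b):
    """Wilcoxon rank-sum statistic for sample a vs sample b, as a z-score."""
    n, m = len(a), len(b)
    ranks = np.argsort(np.argsort(np.concatenate([a, b]))) + 1
    w = ranks[:n].sum()                     # rank sum of sample a
    mean = n * (n + m + 1) / 2
    var = n * m * (n + m + 1) / 12
    return (w - mean) / np.sqrt(var)

# Synthetic data: 1980s vs 2000s with a downward step in every season,
# so a positive z (earlier period higher) in each month.
z_seasons = []
for month in range(12):
    period1 = rng.normal(1.0, 0.2, 10)      # mg/L, 1980s samples
    period2 = rng.normal(0.7, 0.2, 10)      # mg/L, 2000s samples (lower)
    z_seasons.append(rank_sum_z(period1, period2))
z_combined = sum(z_seasons) / np.sqrt(12)   # combine independent seasonal z's
```

Blocking by season before ranking prevents the strong annual cycle in concentrations from masking a between-decade step change.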
ERIC Educational Resources Information Center
Blank, Rolf K.
2004-01-01
The purpose of the three-year CCSSO study was to design, implement, and test the effectiveness of the Data on Enacted Curriculum (DEC) model for improving math and science instruction. The model was tested by measuring its effects with a randomly selected sample of "treatment" schools at the middle grades level as compared to a control group of…
Predicting space telerobotic operator training performance from human spatial ability assessment
NASA Astrophysics Data System (ADS)
Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan
2013-11-01
Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so lengthy and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are only reliable enough to be used for customizing regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
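A minimal sketch of the analysis described above: fit a logistic model of a pass/fail outcome on a spatial test score, then vary the classification threshold to expose the correct-classification/misclassification tradeoff. Scores and outcomes below are synthetic, not the astronaut data.

```python
import numpy as np

# Logistic regression of a synthetic pass/fail outcome on a spatial
# test score, fit by gradient ascent on the log-likelihood.
rng = np.random.default_rng(3)
scores = rng.normal(0, 1, 200)                    # standardized test scores
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * scores)))  # true pass probability
passed = (rng.random(200) < p_true).astype(float)

b0, b1 = 0.0, 0.0
for _ in range(2000):                             # gradient ascent steps
    p = 1 / (1 + np.exp(-(b0 + b1 * scores)))
    b0 += 0.05 * (passed - p).mean()
    b1 += 0.05 * ((passed - p) * scores).mean()

for thr in (0.3, 0.5, 0.7):                       # criterion threshold sweep
    pred = 1 / (1 + np.exp(-(b0 + b1 * scores))) > thr
    acc = (pred == passed.astype(bool)).mean()    # accuracy at this threshold
```

Sweeping `thr` is the criterion-level analysis in the abstract: a stricter threshold trades missed qualifiers against false positives, and the right operating point depends on the costs of each error.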
Using Dispersed Modes During Model Correlation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Hathcock, Megan L.
2017-01-01
The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
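The dispersion-selection step can be sketched with the Modal Assurance Criterion (MAC) as the mode-shape error metric; whether this paper used MAC specifically is an assumption, and the test mode and dispersions below are synthetic.

```python
import numpy as np

# Pick the best pre-generated model dispersion by comparing each
# dispersion's mode shape and frequency to (synthetic) test data.
rng = np.random.default_rng(6)

def mac(phi_a, phi_b):
    """Modal Assurance Criterion (1 = identical shapes, 0 = orthogonal)."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

test_shape = rng.normal(0, 1, 20)                # measured mode shape (synthetic)
test_freq = 2.5                                  # Hz, measured frequency

dispersions = []
for _ in range(1000):                            # pre-test model dispersions
    shape = test_shape + rng.normal(0, 0.5, 20)  # perturbed mode shape
    freq = test_freq * (1 + rng.normal(0, 0.05)) # perturbed frequency
    dispersions.append((shape, freq))

def score(d):                                    # combined shape + frequency error
    shape, freq = d
    return (1 - mac(shape, test_shape)) + abs(freq - test_freq) / test_freq

best = min(dispersions, key=score)
```

Screening thousands of cheap pre-test dispersions this way narrows the search before any expensive correlation software is run, which is the time saving the paper reports.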
Battery of behavioral tests in mice to study postoperative delirium
Peng, Mian; Zhang, Ce; Dong, Yuanlin; Zhang, Yiying; Nakazawa, Harumasa; Kaneki, Masao; Zheng, Hui; Shen, Yuan; Marcantonio, Edward R.; Xie, Zhongcong
2016-01-01
Postoperative delirium is associated with increased morbidity, mortality and cost. However, its neuropathogenesis remains largely unknown, partially owing to the lack of animal model(s). We therefore set out to employ a battery of behavioral tests, including natural and learned behaviors, in mice to determine the effects of laparotomy under isoflurane anesthesia (Anesthesia/Surgery) on these behaviors. The mice were tested at 24 hours before and at 6, 9 and 24 hours after the Anesthesia/Surgery. Composite Z scores were calculated. Cyclosporine A, an inhibitor of the mitochondrial permeability transition pore, was used to determine potential mitochondria-associated mechanisms of these behavioral changes. Anesthesia/Surgery selectively impaired behaviors, including latency to eat food in the buried food test, freezing time and time spent in the center in the open field test, and entries and duration in the novel arm of the Y-maze test, with acute onset and varying time courses. The composite Z scores quantitatively demonstrated the Anesthesia/Surgery-induced behavioral impairment in mice. Cyclosporine A selectively ameliorated the Anesthesia/Surgery-induced reduction in ATP levels, the increases in latency to eat food, and the decreases in entries into the novel arm. These findings suggest that a battery of behavioral tests can be used to establish a mouse model to study postoperative delirium. PMID:27435513
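A sketch of the composite Z score computation: z-score each behavioral measure against the control group, orient all measures so that higher means more impaired, and average. The three tests and all values below are invented placeholders, not the study's data.

```python
import numpy as np

# Composite Z score across a behavioral battery (synthetic placeholder data).
rng = np.random.default_rng(4)
control = {"latency_s": rng.normal(30, 5, 12),
           "center_time_s": rng.normal(60, 10, 12),
           "novel_arm_entries": rng.normal(8, 2, 12)}
surgery = {"latency_s": rng.normal(45, 5, 12),        # longer latency = worse
           "center_time_s": rng.normal(45, 10, 12),   # less center time = worse
           "novel_arm_entries": rng.normal(5, 2, 12)} # fewer entries = worse
higher_is_worse = {"latency_s": True, "center_time_s": False,
                   "novel_arm_entries": False}

def composite_z(group):
    zs = []
    for test, values in group.items():
        mu, sd = control[test].mean(), control[test].std(ddof=1)
        z = (values - mu) / sd                 # z against the control group
        zs.append(z if higher_is_worse[test] else -z)  # orient: higher = worse
    return np.mean(zs, axis=0)                 # one composite score per mouse

z_surgery = composite_z(surgery)
z_control = composite_z(control)
```

Averaging oriented z-scores puts tests with different units on one impairment scale, so a single statistic per animal can be tracked across the post-surgery time points.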
Taper and volume equations for selected Appalachian hardwood species
A. Jeff Martin
1981-01-01
Coefficients for five taper/volume models are developed for 18 Appalachian hardwood species. Each model can be used to estimate diameter at any point on the bole, height to any preselected diameter, and cubic-foot volume between any two points on the bole. The resulting equations were tested on six sets of independent data, and an evaluation of these tests is included.
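To illustrate how such taper equations are used, the sketch below integrates the cross-sectional area implied by a diameter-at-height function to obtain cubic-foot volume between two bole heights; the simple linear taper and tree dimensions are placeholders, not the paper's fitted models.

```python
import math

# Volume between two stem heights from a taper function d(h), by
# integrating cross-sectional area. Taper form and tree are placeholders.
DBH, H = 14.0, 70.0                      # inches, feet (hypothetical tree)

def taper_diameter(h):
    """Diameter (in) at height h (ft): linear taper anchored at breast height."""
    return DBH * (H - h) / (H - 4.5)     # breast height = 4.5 ft

def volume(h1, h2, steps=1000):
    """Cubic-foot volume between heights h1 and h2 (trapezoid rule)."""
    total, dh = 0.0, (h2 - h1) / steps
    for i in range(steps + 1):
        d = taper_diameter(h1 + i * dh)
        area = math.pi * (d / 24) ** 2   # inches -> feet radius; area in ft^2
        w = 0.5 if i in (0, steps) else 1.0
        total += w * area * dh
    return total

v = volume(4.5, 60.0)                    # merchantable volume, 4.5 to 60 ft
```

Height to a preselected diameter follows the same way by inverting `taper_diameter`, which for the paper's fitted forms is done analytically or by root finding.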
NASA Technical Reports Server (NTRS)
Griffin, S. A.; Madsen, A. P.; Mcclain, A. A.
1984-01-01
The feasibility of designing advanced-technology, highly maneuverable fighter aircraft models to achieve full-scale Reynolds number in the National Transonic Facility (NTF) is examined. Each of the selected configurations is tested for aeroelastic effects through the use of force and pressure data. A review of materials and material processes is also included.
Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.
Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore
2015-12-01
A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on food items selected by multiple hypothesis testing and bootstrapped LASSO, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, the models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable contribution to determining predictors, as our models' predictive power is higher than that of comparable studies. As a unique feature, we additionally confirmed our models' predictive power on a test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
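A toy version of the LASSO selection step, via coordinate descent on synthetic "food item" features: only the truly predictive features should survive the soft threshold. The data, effect sizes, and penalty below are illustrative.

```python
import numpy as np

# Toy LASSO (coordinate descent) selecting predictive features for a
# fatty-acid-like outcome; features and effects are synthetic.
rng = np.random.default_rng(5)
n, p = 300, 10
X = rng.normal(0, 1, (n, p))                  # standardized intake of 10 items
beta_true = np.zeros(p)
beta_true[[0, 3]] = [1.0, -0.8]               # only two items truly matter
y = X @ beta_true + rng.normal(0, 0.5, n)

def lasso_cd(X, y, lam, iters=200):
    """Minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by coordinate descent."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r / len(y)
            denom = X[:, j] @ X[:, j] / len(y)
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0) / denom  # soft-threshold
    return beta

beta = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(beta) > 1e-6)
```

The bootstrapped variant in the paper repeats this on resampled data and keeps only features selected in most resamples, which stabilizes the food-item list.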
VCE early acoustic test results of General Electric's high-radius ratio coannular plug nozzle
NASA Technical Reports Server (NTRS)
Knott, P. R.; Brausch, J. F.; Bhutiani, P. K.; Majjigi, R. K.; Doyle, V. L.
1980-01-01
Results of variable cycle engine (VCE) early acoustic engine and model scale tests are presented. A summary of an extensive series of far field acoustic, advanced acoustic, and exhaust plume velocity measurements, made with a laser velocimeter, of inverted velocity and temperature profile, high radius ratio coannular plug nozzles on a YJ101 VCE static engine test vehicle is reviewed. Select model scale simulated flight acoustic measurements for an unsuppressed and a mechanically suppressed coannular plug nozzle are also discussed. The engine acoustic nozzle tests verify previous model scale noise reduction measurements. The engine measurements show 4 to 6 PNdB aft quadrant jet noise reduction and up to 7 PNdB forward quadrant shock noise reduction relative to a fully mixed conical nozzle at the same specific thrust and mixed pressure ratio. The influences of outer nozzle radius ratio, inner stream velocity ratio, and area ratio are discussed. Also, laser velocimeter measurements of mean velocity and turbulent velocity of the YJ101 engine are illustrated. Select model scale static and simulated flight acoustic measurements are shown which corroborate that coannular suppression is maintained in forward speed.
Effect of strain rate and temperature on mechanical properties of selected building Polish steels
NASA Astrophysics Data System (ADS)
Moćko, Wojciech; Kruszka, Leopold
2015-09-01
Computer-aided design (CAD) programs are currently the basic tool for designing structures subjected to impact loading. Numerical calculation substantially reduces the time required for the design stage of such projects. Proper use of computer-aided design, however, requires input data for the numerical software, including elastic-plastic models of the structural materials. This work applies the constitutive model developed by Rusinek and Klepaczko (RK) to the modelling of the mechanical behaviour of selected grades of structural steel (St0S, St3SX, 18GS and 34GS) and presents the results of experimental and empirical analyses describing the dynamic elastic-plastic behaviour of the tested materials over a wide range of temperatures. To calibrate the RK constitutive model, series of compression tests over a wide range of strain rates, comprising static, quasi-static and dynamic investigations at lowered, room and elevated temperatures, were carried out on two testing stands: a servo-hydraulic machine and a split Hopkinson pressure bar. The results were analysed to determine the influence of temperature and strain rate on the visco-plastic response of the tested steels, and show good correlation with experimental data.
EMC system test performance on Spacelab
NASA Astrophysics Data System (ADS)
Schwan, F.
1982-07-01
Electromagnetic compatibility testing of the Spacelab engineering model is discussed. Documentation, test procedures (including data monitoring and test configuration set up) and performance assessment approach are described. Equipment was assembled into selected representative flight configurations. The physical and functional interfaces between the subsystems were demonstrated within the integration and test sequence which culminated in the flyable configuration Long Module plus one Pallet.
Durner, George M.; Amstrup, Steven C.; Nielson, Ryan M.; McDonald, Trent; Huzurbazar, Snehalata
2004-01-01
Polar bears (Ursus maritimus) depend on ice-covered seas to satisfy life history requirements. Modern threats to polar bears include oil spills in the marine environment and changes in ice composition resulting from climate change. Managers need practical models that explain the distribution of bears in order to assess the impacts of these threats. We explored the use of discrete choice models to describe habitat selection by female polar bears in the Beaufort Sea. Using stepwise procedures we generated resource selection models of habitat use. Sea ice characteristics and ocean depths at known polar bear locations were compared to the same features at randomly selected locations. Models generated for each of four seasons confirmed complexities of habitat use by polar bears and their response to numerous factors. Bears preferred shallow water areas where different ice types intersected. Variation among seasons was reflected mainly in differential selection of total ice concentration, ice stages, floe sizes, and their interactions. Distance to the nearest ice interface was a significant term in models for three seasons. Water depth was selected as a significant term in all seasons, possibly reflecting higher productivity in shallow water areas. Preliminary tests indicate seasonal models can predict polar bear distribution based on prior sea ice data.
Yang, Mingxing; Li, Xiumin; Li, Zhibin; Ou, Zhimin; Liu, Ming; Liu, Suhuan; Li, Xuejun; Yang, Shuyu
2013-01-01
DNA microarray analysis is characterized by obtaining a large number of gene variables from a small number of observations. Cluster analysis is widely used to analyze DNA microarray data to make classification and diagnosis of disease. Because there are so many irrelevant and insignificant genes in a dataset, a feature selection approach must be employed in data analysis. The performance of cluster analysis of this high-throughput data depends on whether the feature selection approach chooses the most relevant genes associated with disease classes. Here we proposed a new method using multiple Orthogonal Partial Least Squares-Discriminant Analysis (mOPLS-DA) models and S-plots to select the most relevant genes to conduct three-class disease classification and prediction. We tested our method using Golub's leukemia microarray data. For three classes with subtypes, we proposed hierarchical orthogonal partial least squares-discriminant analysis (OPLS-DA) models and S-plots to select features for two main classes and their subtypes. For three classes in parallel, we employed three OPLS-DA models and S-plots to choose marker genes for each class. The power of feature selection to classify and predict three-class disease was evaluated using cluster analysis. Further, the general performance of our method was tested using four public datasets and compared with those of four other feature selection methods. The results revealed that our method effectively selected the most relevant features for disease classification and prediction, and its performance was better than that of the other methods.
Physical employment standards for U.K. fire and rescue service personnel.
Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L
2016-01-01
Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. (i) Develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel. (ii) Develop practical physical selection tests for FRS applicants. (iii) Evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Ares I Scale Model Acoustic Tests Instrumentation for Acoustic and Pressure Measurements
NASA Technical Reports Server (NTRS)
Vargas, Magda B.; Counter, Douglas D.
2011-01-01
The Ares I Scale Model Acoustic Test (ASMAT) was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116. The test article included a 5% scale Ares I vehicle model and tower mounted on the Mobile Launcher. Acoustic and pressure data were measured by approximately 200 instruments located throughout the test article. There were four primary ASMAT instrument suites: ignition overpressure (IOP), lift-off acoustics (LOA), ground acoustics (GA), and spatial correlation (SC). Each instrumentation suite incorporated different sensor models which were selected based upon measurement requirements. These requirements included the type of measurement, exposure to the environment, instrumentation check-outs and data acquisition. The sensors were attached to the test article using different mounts and brackets dependent upon the location of the sensor. This presentation addresses the observed effect of the sensors and mounts on the acoustic and pressure measurements.
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
Vaugeois, J M; Costentin, J
1998-01-01
Antidepressants have been in use for 40 years. All presently used antidepressants have a slow onset of action and do not help all patients; thus, there is an absolute need for new antidepressants. A variety of animal models, often based upon the monoaminergic theory of depressive disorders, has been used to screen the current antidepressants. In fact, the main focus of most of these animal models has been to predict antidepressant potential, i.e. to establish predictive validity. However, the evaluation of such animal models should also consider face validity, i.e. how closely the model resembles the human condition, and this should help to identify innovative medicines. Antidepressants, when taken by a healthy person, induce nothing more than side effects, unrelated to an action on mood, whereas they alleviate depressive symptomatology in depressed patients. We have speculated that genetically selected animal models would be closer to the human clinical situation than models based on standard laboratory strains. We have shown here that marked differences exist between strains of mice in the amount of immobility, i.e. "spontaneous helplessness", observed in the tail suspension test, a method used to screen potential antidepressants. We have studied the behavioural characteristics of mice selectively bred for spontaneously high or low immobility scores in the tail suspension test. Hopefully, these selectively bred lines will provide a novel approach to investigating the behavioural, neurochemical and neuroendocrine correlates of antidepressant action.
Selective interference with image retention and generation: evidence for the workspace model.
van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio
2009-08-01
We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.
Chemical function based pharmacophore generation of endothelin-A selective receptor antagonists.
Funk, Oliver F; Kettmann, Viktor; Drimal, Jan; Langer, Thierry
2004-05-20
Both quantitative and qualitative chemical function based pharmacophore models of endothelin-A (ET(A)) selective receptor antagonists were generated using the two algorithms HypoGen and HipHop, respectively, which are implemented in the Catalyst molecular modeling software. The input for HypoGen is a training set of 18 ET(A) antagonists exhibiting IC(50) values ranging between 0.19 nM and 67 microM. The best output hypothesis consists of five features: two hydrophobic (HY), one ring aromatic (RA), one hydrogen bond acceptor (HBA), and one negative ionizable (NI) function. The highest scoring HipHop model consists of six features: three hydrophobic (HY), one ring aromatic (RA), one hydrogen bond acceptor (HBA), and one negative ionizable (NI). It is the result of an input of three highly active, selective, and structurally diverse ET(A) antagonists. The predictive power of the quantitative model was confirmed using a test set of 30 compounds, whose activity values spread over 6 orders of magnitude. The two pharmacophores were tested according to their ability to extract known endothelin antagonists from the 3D molecular structure database of Derwent's World Drug Index. Most of the selective ET(A) antagonist entries were thereby detected by the two hypotheses. Furthermore, the pharmacophores were used to screen the Maybridge database. Six compounds were chosen from the output hit lists for in vitro testing of their ability to displace endothelin-1 from its receptor. Two of these are new potential lead compounds because they are structurally novel and exhibit satisfactory activity in the binding assay.
Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero
2000-01-01
This paper describes the results of the modal test planning and the pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered: one based on the Modal Cost technique, one based on the Balanced Singular Value technique, a technique known as the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered: one based on the Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection technique using a Genetic Algorithm (GA) search combined with a criterion based on Hankel Singular Values (HSVs). For selecting shaker locations, four techniques were also considered: one based on the Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search combined with a criterion based on HSVs. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. The Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor and shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered, including Guyan, IRS, Modal, and Hybrid. Based on pre-test cross-orthogonality checks using various reduction techniques, a Hybrid TAM reduction technique was selected and used for all three vehicle fuel level configurations.
Statistical validation of normal tissue complication probability models.
Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis
2012-09-01
To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
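The permutation-testing idea described above — asking whether a model's apparent performance could arise by chance under shuffled outcome labels — can be sketched in plain Python. The AUC metric matches the abstract's model-assessment criterion; the function names, scores, and labels below are illustrative, not the authors' implementation:

```python
import random

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_pvalue(scores, labels, n_perm=2000, seed=0):
    """P-value for the observed AUC against a label-shuffled null distribution."""
    rng = random.Random(seed)
    observed = auc(scores, labels)
    shuffled = labels[:]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if auc(scores, shuffled) >= observed:
            hits += 1
    # Add-one correction keeps the estimate strictly positive.
    return (hits + 1) / (n_perm + 1)

# Hypothetical model scores for 6 patients (1 = complication, 0 = none).
p = permutation_pvalue([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
```

A small p-value indicates the model's AUC is unlikely under the null of no score-outcome association; in practice this would be wrapped inside the repeated double cross-validation loop.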
NASA Technical Reports Server (NTRS)
Berry, R. L.; Tegart, J. R.; Demchak, L. J.
1979-01-01
Space shuttle propellant dynamics during ET/Orbiter separation in the RTLS (return to launch site) mission abort sequence were investigated in a test program conducted in the NASA KC-135 "Zero G" aircraft using a 1/10th-scale model of the ET LOX tank. Low-g parabolas were flown, from which thirty tests were selected for evaluation. Data on the nature of low-g propellant reorientation in the ET LOX tank, and measurements of the forces exerted on the tank by the moving propellant, provide a basis for correlation with an analytical model of the slosh phenomenon.
Test of a habitat suitability index for black bears in the southern Appalachians
Mitchell, M.S.; Zimmerman, J.W.; Powell, R.A.
2002-01-01
We present a habitat suitability index (HSI) model for black bears (Ursus americanus) living in the southern Appalachians that was developed a priori from the literature, then tested using location and home range data collected in the Pisgah Bear Sanctuary, North Carolina, over a 12-year period. The HSI was developed and initially tested using habitat and bear data collected over 2 years in the sanctuary. We increased the number of habitat sampling sites, included data collected in areas affected by timber harvest, used more recent Geographic Information System (GIS) technology to create a more accurate depiction of the HSI for the sanctuary, evaluated effects of input variability on HSI values, and duplicated the original tests using more data. We found that the HSI predicted habitat selection by bears at both the population and individual levels, and that the distribution of collared bears was positively correlated with HSI values. We found a stronger relationship between habitat selection by bears and a second-generation HSI. We evaluated our model with criteria suggested by Roloff and Kernohan (1999) for evaluating HSI model reliability and concluded that our model was reliable and robust. The model's strength is that it was developed as an a priori hypothesis directly modeling the relationship between critical resources and fitness of bears and tested with independent data. We present the HSI spatially as a continuous fitness surface where the potential contribution of habitat to the fitness of a bear is depicted at each point in space.
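The abstract does not specify how the component suitability indices are combined, but HSI models conventionally aggregate them as a (weighted) geometric mean, so that a value of zero in any critical component zeroes the composite score. A hedged sketch of that general convention — not this paper's specific model; the component values and weights are hypothetical:

```python
import math

def hsi(component_scores, weights=None):
    """Composite HSI as a weighted geometric mean of component suitability
    indices, each assumed to lie in [0, 1]. A zero component yields HSI = 0."""
    if weights is None:
        weights = [1.0] * len(component_scores)
    if any(s == 0 for s in component_scores):
        return 0.0
    total = sum(weights)
    return math.exp(sum(w * math.log(s)
                        for s, w in zip(component_scores, weights)) / total)

# Hypothetical components: food availability, escape cover, distance to roads.
score = hsi([0.8, 0.6, 0.9])
```

Evaluating this at every cell of a GIS raster produces the kind of continuous suitability surface the authors describe.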
Experimental evolution of a sexually selected display in yeast
Rogers, David W.; Greig, Duncan
2008-01-01
The fundamental principle underlying sexual selection theory is that an allele conferring an advantage in the competition for mates will spread through a population. Remarkably, this has never been demonstrated empirically. We have developed an experimental system using yeast for testing genetic models of sexual selection. Yeast signal to potential partners by producing an attractive pheromone; stronger signallers are preferred as mates. We tested the effect of high and low levels of sexual selection on the evolution of a gene determining the strength of this signal. Under high sexual selection, an allele encoding a stronger signal was able to invade a population of weak signallers, and we observed a corresponding increase in the amount of pheromone produced. By contrast, the strong signalling allele failed to invade under low sexual selection. Our results demonstrate, for the first time, the spread of a sexually selected allele through a population, confirming the central assumption of sexual selection theory. Our yeast system is a powerful tool for investigating the genetics of sexual selection. PMID:18842545
Genetic signatures of natural selection in a model invasive ascidian
NASA Astrophysics Data System (ADS)
Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin
2017-03-01
Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data with six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.
Bayesian model selection applied to artificial neural networks used for water resources modeling
NASA Astrophysics Data System (ADS)
Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.
2008-04-01
Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.
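The paper estimates model evidence by MCMC; as a much cruder stand-in that conveys the same evidence-versus-complexity trade-off, the BIC approximation to the log-evidence can be compared across candidate models of increasing complexity. The sketch below fits polynomials (a simple proxy for ANNs of growing size, not the paper's method) using stdlib-only linear algebra; all data are synthetic:

```python
import math

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via normal equations + Gaussian elimination."""
    k = degree + 1
    # Build X^T X and X^T y for the Vandermonde design matrix.
    A = [[sum(x ** (i + j) for x in xs) for j in range(k)] for i in range(k)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for r in reversed(range(k)):               # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef

def bic(xs, ys, degree):
    """Schwarz criterion: lower BIC ~ higher approximate log-evidence."""
    coef = fit_poly(xs, ys, degree)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n, k = len(xs), degree + 1
    return n * math.log(rss / n + 1e-12) + k * math.log(n)
```

Selecting the degree with the lowest BIC mirrors, very roughly, choosing the ANN structure with the highest estimated evidence; it lacks the paper's posterior-weight diagnostic for redundant hidden nodes.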
The effect of interface properties on nickel base alloy composites
NASA Technical Reports Server (NTRS)
Groves, M.; Grossman, T.; Senemeier, M.; Wright, K.
1995-01-01
This program was performed to assess the extent to which mechanical behavior models can predict the properties of sapphire fiber/nickel aluminide matrix composites and help guide their development by defining improved combinations of matrix and interface coating. The program consisted of four tasks: 1) selection of the matrices and interface coating constituents using a modeling-based approach; 2) fabrication of the selected materials; 3) testing and evaluation of the materials; and 4) evaluation of the behavior models to develop recommendations. Ni-50Al and Ni-20Al-30Fe (a/o) matrices were selected, which gave brittle and ductile behavior, respectively, and an interface coating of PVD YSZ was selected which provided strong bonding to the sapphire fiber. Significant fiber damage and strength loss were observed in the composites, which made straightforward comparison of properties with models difficult. Nevertheless, the models selected generally provided property predictions which agreed well with results when fiber degradation was incorporated. The presence of a strong interface bond was felt to be detrimental in the NiAl MMC system, where low toughness and low strength were observed.
Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting
Steele, Katie; Werndl, Charlotte
2018-01-01
This article argues that common intuitions regarding (a) the specialness of ‘use-novel’ data for confirmation and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions, are too crude. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised. PMID:29780170
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in cost because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
Auditory dysfunction associated with solvent exposure
2013-01-01
Background: A number of studies have demonstrated that solvents may induce auditory dysfunction. However, there is still little knowledge regarding the main signs and symptoms of solvent-induced hearing loss (SIHL). The aim of this research was to investigate the association between solvent exposure and adverse effects on peripheral and central auditory functioning with a comprehensive audiological test battery. Methods: Seventy-two solvent-exposed workers and 72 non-exposed workers were selected to participate in the study. The test battery comprised pure-tone audiometry (PTA), transient evoked otoacoustic emissions (TEOAE), Random Gap Detection (RGD) and the Hearing-in-Noise Test (HINT). Results: Solvent-exposed subjects presented with poorer mean test results than non-exposed subjects. A bivariate and multivariate linear regression model analysis was performed. One model for each auditory outcome (PTA, TEOAE, RGD and HINT) was independently constructed. For all of the models, solvent exposure was significantly associated with the auditory outcome. Age also appeared significantly associated with some auditory outcomes. Conclusions: This study provides further evidence of the possible adverse effect of solvents on peripheral and central auditory functioning. A discussion of these effects and the utility of selected hearing tests to assess SIHL is addressed. PMID:23324255
Wang, Heng; Sang, Yuanjun
2017-10-01
The mechanical behavior modeling of human soft biological tissues is a key issue for a large number of medical applications, such as surgery simulation, surgery planning and diagnosis. To develop a biomechanical model of human soft tissues under large deformation for surgery simulation, the adaptive quasi-linear viscoelastic (AQLV) model was proposed and applied to human forearm soft tissues via indentation tests. An incremental ramp-and-hold test was carried out to calibrate the model parameters. To verify the predictive ability of the AQLV model, the incremental ramp-and-hold test, a single large-amplitude ramp-and-hold test and a sinusoidal cyclic test at large strain amplitude were adopted in this study. Results showed that the AQLV model could predict the test results under all three loading conditions. It is concluded that the AQLV model is feasible for describing the nonlinear viscoelastic properties of in vivo soft tissues under large deformation. This model is therefore a promising candidate soft tissue model in software for surgery simulation or diagnosis.
UXO Discrimination Study Former Spencer Artillery Range
2013-04-01
…tested. This approach uses one of three models labeled aggressive, intermediate, and conservative. The choice of model depends on a… anomalies will be selected for labeling using NMAL. The goal at this step is to maximize the information gain from new labels requested from the set of… number of false alarms (nFA) is lower for the classifier where feature selection was used. … Complexity of a site is measured using an information…
NASA Technical Reports Server (NTRS)
Spring, Samuel D.
2006-01-01
This report documents the results of an experimental program conducted on two advanced metallic alloy systems (Rene' 142 directionally solidified (DS) alloy and Rene' N6 single crystal alloy) and the characterization of two distinct internal state variable inelastic constitutive models. The long-term objective of the study was to develop a computational life prediction methodology that can integrate the obtained material data. A specialized test matrix for characterizing advanced unified viscoplastic models was specified and conducted. This matrix included strain-controlled tensile tests with intermittent relaxation tests with 2-hr hold times, constant stress creep tests, stepped creep tests, mixed creep and plasticity tests, cyclic temperature creep tests, and tests in which temperature overloads were present to simulate actual operating conditions for validation of the models. The selected internal state variable models were shown to be capable of representing the material behavior exhibited by the experimental results; however, the program ended prior to final validation of the models.
A powerful and robust test in genetic association studies.
Cheng, Kuang-Fu; Lee, Jen-Yu
2014-01-01
There are several well-known single-SNP tests presented in the literature for detecting gene-disease association signals. Having in place an efficient and robust testing process across all genetic models would allow a more comprehensive approach to analysis. Although some studies have shown that it is possible to construct such a test when the variants are common and the genetic model satisfies certain conditions, the model conditions are too restrictive and in general difficult to verify. In this paper, we propose a powerful and robust test without assuming any model restrictions. Our test is based on selected 2 × 2 tables derived from the usual 2 × 3 table. Using signals from these tables, we show through simulations across a wide range of allele frequencies and genetic models that this approach may produce a test which is almost uniformly most powerful in the analysis of low- and high-frequency variants. Two cancer studies are used to demonstrate applications of the proposed test. © 2014 S. Karger AG, Basel.
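The collapsing of the 2 × 3 genotype table into 2 × 2 tables can be illustrated with a simplified stand-in for the paper's statistic: form the standard dominant and recessive collapsings and keep the larger Pearson chi-square. The counts and the choice of collapsings below are illustrative, not the authors' exact selection rule.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def max_collapsed_test(case, control):
    """From a 2x3 genotype table (counts of AA, Aa, aa in cases and
    controls), form the dominant and recessive 2x2 collapsings and
    return the larger chi-square. A simplified stand-in for the
    selected-table statistic described in the abstract."""
    dom = chi2_2x2(case[0], case[1] + case[2], control[0], control[1] + control[2])
    rec = chi2_2x2(case[0] + case[1], case[2], control[0] + control[1], control[2])
    return max(dom, rec)
```

With hypothetical counts, a dominant-acting variant yields a statistic well above the usual 3.84 critical value, while a perfectly balanced table yields zero.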
Energetics and dynamics of simple impulsive solar flares
NASA Technical Reports Server (NTRS)
Starr, R.; Heindl, W. A.; Crannell, C. J.; Thomas, R. J.; Batchelor, D. A.; Magun, A.
1987-01-01
Flare energetics and dynamics were studied using observations of simple impulsive spike bursts. A large, homogeneous set of events was selected to enable the most definite tests possible of competing flare models, in the absence of spatially resolved observations. The emission mechanisms and specific flare models that were considered in this investigation are described, and the derivations of the parameters that were tested are presented. Results of the correlation analysis between soft and hard X-ray energetics are also presented. The ion conduction front model and tests of that model with the well-observed spike bursts are described. Finally, conclusions drawn from this investigation and suggestions for future studies are discussed.
Hubben, Gijs; Bootsma, Martin; Luteijn, Michiel; Glynn, Diarmuid; Bishai, David
2011-01-01
Background Screening at hospital admission for carriage of methicillin-resistant Staphylococcus aureus (MRSA) has been proposed as a strategy to reduce nosocomial infections. The objective of this study was to determine the long-term costs and health benefits of selective and universal screening for MRSA at hospital admission, using both PCR-based and chromogenic media-based tests in various settings. Methodology/Principal Findings A simulation model of MRSA transmission was used to determine costs and effects over 15 years from a US healthcare perspective. We compared admission screening together with isolation of identified carriers against a baseline policy without screening or isolation. Strategies included selective screening of high risk patients or universal admission screening, with PCR-based or chromogenic media-based tests, in medium (5%) or high nosocomial prevalence (15%) settings. The costs of screening and isolation per averted MRSA infection were lowest using selective chromogenic-based screening in high and medium prevalence settings, at $4,100 and $10,300, respectively. Replacing the chromogenic-based test with a PCR-based test costs $13,000 and $36,200 per additional infection averted, and subsequent extension to universal screening with PCR would cost $131,000 and $232,700 per additional infection averted, in high and medium prevalence settings respectively. Assuming $17,645 benefit per infection averted, the most cost-saving strategies in high and medium prevalence settings were selective screening with PCR and selective screening with chromogenic, respectively. Conclusions/Significance Admission screening costs $4,100–$21,200 per infection averted, depending on strategy and setting. Including financial benefits from averted infections, screening could well be cost saving. PMID:21483492
Rodgers, K E; Schwartz, H E; Roda, N; Thornton, M; Kobak, W; diZerega, G S
2000-04-01
To assess the efficacy of Oxiplex (FzioMed, Inc., San Luis Obispo, CA) barriers. Films of polyethylene oxide and carboxymethylcellulose (Oxiplex) were tested for strength and tissue adherence. Films were selected for evaluation in models of biocompatibility and adherence. Three films were selected for evaluation in efficacy studies, and one was evaluated for effects on bacterial peritonitis. Handling characteristics of Oxiplex film were evaluated via laparoscopy. University laboratory. Rabbits, rats, pigs. Placement of Oxiplex prototypes at the site of injury. Mechanical properties, biocompatibility, tissue adherence, adhesion development, infection potentiation, and device handling. Mechanical tests indicated that tensile strength and elongation were inversely correlated. All films tested had excellent tissue adherence properties. Selected films, based on residence time and biocompatibility, prevented adhesion formation in all animals and were highly efficacious in preventing adhesion reformation. The optimal Oxiplex prototype prevented adhesion reformation in 91% of the animals. This Oxiplex film, dyed to allow visualization, prevented adhesion reformation and did not affect bacterial peritonitis. In a laparoscopic model, the Oxiplex film, delivered in FilmSert forceps via a 5.0-mm trocar, rapidly unfurled and could be easily applied to tissue with strong adherence. These data show development of an adhesion prevention material that is tissue adherent, can be placed via laparoscopy, and does not affect host resistance.
Population Pharmacokinetics of Intranasal Scopolamine
NASA Technical Reports Server (NTRS)
Wu, L.; Chow, D. S. L.; Putcha, L.
2013-01-01
Introduction: An intranasal gel dosage formulation of scopolamine (INSCOP) was developed for the treatment of Space Motion Sickness (SMS). Its bioavailability and pharmacokinetics (PK) were evaluated using data collected in Phase II IND protocols. We reported earlier statistically significant gender differences in PK parameters of INSCOP at a dose level of 0.4 mg. To identify covariates that influence PK parameters of INSCOP, we examined population covariates of the INSCOP PK model for the 0.4 mg dose. Methods: Plasma scopolamine concentration versus time data were collected from 20 normal healthy human subjects (11 male/9 female) after a 0.4 mg dose. Phoenix NLME was employed for PK analysis of these data using gender, body weight and age as covariates for model selection. Model selection was based on a likelihood ratio test on the difference in the criterion (-2LL). Statistical significance for base model building and individual covariate analysis was set at P less than 0.05 (delta(-2LL) = 3.84). Results: A one-compartment pharmacokinetic model with first-order elimination best described INSCOP concentration-time profiles. Inclusion of gender, body weight and age as covariates individually significantly reduced -2LL by the cut-off value of 3.84 (P less than 0.05) when tested against the base model. After the forward stepwise selection and backward elimination steps, gender was added to the final model, in which it had a significant influence on the absorption rate constant (ka) and the volume of distribution (V) of INSCOP. Conclusion: A population pharmacokinetic model for INSCOP has been identified, and gender was a significant contributing covariate in the final model. The volume of distribution and ka were significantly higher in males than in females, confirming gender-dependent pharmacokinetics of scopolamine after administration of a 0.4 mg dose.
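The -2LL criterion described here is a standard likelihood-ratio test for nested models, which can be sketched as follows; the function name and the example -2LL values are hypothetical, not data from the study.

```python
def covariate_significant(neg2ll_base, neg2ll_with_cov, cutoff=3.84):
    """Likelihood-ratio test for one added covariate in a nested model.

    The drop in -2 log-likelihood between nested models is compared to
    the chi-square critical value for 1 degree of freedom at P = 0.05,
    i.e. delta(-2LL) = 3.84, the cut-off used in the abstract.
    """
    delta = neg2ll_base - neg2ll_with_cov
    return delta > cutoff, delta

# Hypothetical -2LL values: adding gender drops -2LL from 412.7 to 407.5,
# a change of 5.2 > 3.84, so the covariate would be retained.
keep_gender, delta = covariate_significant(412.7, 407.5)
```

A covariate whose inclusion drops -2LL by less than the cutoff (say, from 412.7 to 410.0) would be rejected at this step.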
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with Genetic Algorithms method. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. By using the AdaBoost method with the thirty selected features the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, with increased accuracy over the original dataset by 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
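The jackknife test used for evaluation above is a leave-one-out procedure, sketched here in generic form; the training and prediction callables are placeholders, not the CfsSubset/AdaBoost pipeline itself.

```python
def jackknife_accuracy(X, y, train_fn, predict_fn):
    """Leave-one-out (jackknife) test: predict each sample with a model
    trained on all remaining samples; return the fraction predicted
    correctly."""
    n = len(X)
    correct = 0
    for i in range(n):
        # Train on everything except sample i, then predict sample i.
        model = train_fn(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        correct += predict_fn(model, X[i]) == y[i]
    return correct / n
```

Any classifier can be plugged in through `train_fn`/`predict_fn`; because each sample is held out exactly once, the estimate uses all data without letting any sample score itself.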
Fully Bayesian tests of neutrality using genealogical summary statistics.
Drummond, Alexei J; Suchard, Marc A
2008-10-31
Many data summary statistics have been developed to detect departures from neutral expectations of evolutionary models. However, questions about the neutrality of the evolution of genetic loci within natural populations remain difficult to assess. One critical cause of this difficulty is that most methods for testing neutrality make simplifying assumptions simultaneously about the mutational model and the population size model. Consequently, rejecting the null hypothesis of neutrality under these methods could result from violations of either or both assumptions, making interpretation troublesome. Here we harness posterior predictive simulation to exploit summary statistics of both the data and model parameters to test the goodness-of-fit of standard models of evolution. We apply the method to test the selective neutrality of molecular evolution in non-recombining gene genealogies and demonstrate its utility on four real data sets, identifying significant departures from neutrality in human influenza A virus, even after controlling for variation in population size. Importantly, by employing a full model-based Bayesian analysis, our method separates the effects of demography from the effects of selection. The method also allows multiple summary statistics to be used in concert, thus potentially increasing sensitivity. Furthermore, our method remains useful in situations where analytical expectations and variances of summary statistics are not available. This aspect has great potential for the analysis of temporally spaced data, an expanding area previously neglected owing to the limited availability of theory and methods.
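Posterior predictive simulation of a summary statistic can be sketched as a tail probability over replicate data sets. This is a generic outline of the idea, not the authors' implementation; the identity "simulation" in the test is purely illustrative.

```python
def posterior_predictive_pvalue(observed_stat, posterior_draws, simulate, stat):
    """Posterior predictive check of a summary statistic.

    For each posterior draw of the model parameters, simulate a
    replicate data set and compute the statistic; return the fraction
    of replicates at least as large as the observed value. Tail
    probabilities near 0 or 1 indicate the fitted model cannot
    reproduce the observed statistic (e.g. a departure from
    neutrality).
    """
    sims = [stat(simulate(theta)) for theta in posterior_draws]
    return sum(s >= observed_stat for s in sims) / len(sims)
```

Because the statistic is recomputed per replicate, no analytical expectation or variance is needed, which is the property the abstract highlights for temporally spaced data.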
NASA Astrophysics Data System (ADS)
Šafka, J.; Ackermann, M.; Voleský, L.
2016-04-01
This paper deals with establishing build parameters for 1.2344 (H13) tool steel processed using Selective Laser Melting (SLM) technology with a layer thickness of 50 µm. In the first part of the work, a test matrix of models in the form of a cube with a chamfered edge was built under various build parameters, such as laser scanning speed and laser power. The resulting models were subjected to a set of tests including measurement of surface roughness, inspection of the inner structure with the aid of light optical microscopy and scanning electron microscopy, and evaluation of micro-hardness. These tests allowed us to evaluate the influence of changes in building strategy on the properties of the resulting model. In the second part of the work, the mechanical properties of the H13 steel were examined. For this purpose, a set of samples in the form of a "dog bone" were printed in three different alignments relative to the building plate and tested on a universal testing machine. Mechanical testing of the samples should then reveal whether the different orientation, and thus different layering of the material, influences its mechanical properties. For this type of material, the producer provides parameters for a layer thickness of 30 µm only. Thus, our 50 µm building strategy shortens the building time, which is valuable especially for large models. Results of the mechanical tests show slight variation in mechanical properties across the different alignments of the samples.
TESTS OF INDOOR AIR QUALITY SINKS
Experiments were conducted in a room-size test chamber to determine the sink effects of selected materials on indoor air concentrations of p-dichlorobenzene (PDCB). These effects might alter pollutant behavior from that predicted using similar indoor air quality models, by reducin...
Pulse Jet Mixing Tests With Noncohesive Solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Perry A.; Bamberger, Judith A.; Enderlin, Carl W.
2012-02-17
This report summarizes results from pulse jet mixing (PJM) tests with noncohesive solids in Newtonian liquid. The tests were conducted during FY 2007 and 2008 to support the design of mixing systems for the Hanford Waste Treatment and Immobilization Plant (WTP). Tests were conducted at three geometric scales using noncohesive simulants, and the test data were used to develop models predicting two measures of mixing performance for full-scale WTP vessels. The models predict the cloud height (the height to which solids will be lifted by the PJM action) and the critical suspension velocity (the minimum velocity needed to ensure all solids are suspended off the floor, though not fully mixed). From the cloud height, the concentration of solids at the pump inlet can be estimated. The predicted critical suspension velocity for lifting all solids is not precisely the same as the mixing requirement for 'disturbing' a sufficient volume of solids, but the values will be similar and closely related. These predictive models were successfully benchmarked against larger scale tests and compared well with results from computational fluid dynamics simulations. The application of the models to assess mixing in WTP vessels is illustrated in examples for 13 distinct designs and selected operational conditions. The values selected for these examples are not final; thus, the estimates of performance should not be interpreted as final conclusions of design adequacy or inadequacy. However, this work does reveal that several vessels may require adjustments to design, operating features, or waste feed properties to ensure confidence in operation. The models described in this report will prove to be valuable engineering tools to evaluate options as designs are finalized for the WTP. Revision 1 refines data sets used for model development and summarizes models developed since the completion of Revision 0.
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
Optimal Decision Stimuli for Risky Choice Experiments: An Adaptive Approach
Cavagnaro, Daniel R.; Gonzalez, Richard; Myung, Jay I.; Pitt, Mark A.
2014-01-01
Collecting data to discriminate between models of risky choice requires careful selection of decision stimuli. Models of decision making aim to predict decisions across a wide range of possible stimuli, but practical limitations force experimenters to select only a handful of them for actual testing. Some stimuli are more diagnostic between models than others, so the choice of stimuli is critical. This paper provides the theoretical background and a methodological framework for adaptive selection of optimal stimuli for discriminating among models of risky choice. The approach, called Adaptive Design Optimization (ADO), adapts the stimulus in each experimental trial based on the results of the preceding trials. We demonstrate the validity of the approach with simulation studies aiming to discriminate Expected Utility, Weighted Expected Utility, Original Prospect Theory, and Cumulative Prospect Theory models. PMID:24532856
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. 
Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
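The multicollinearity-reduction step in the automatically generated NTCP model above uses the variance inflation factor. A minimal NumPy sketch of the textbook VIF definition (not the authors' code) follows; the threshold and toy data in the test are illustrative assumptions.

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with intercept). Columns with VIF above a
    chosen threshold (commonly 5 or 10) are dropped to reduce
    multicollinearity before model selection.
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])          # intercept + other columns
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares fit
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2) if r2 < 1 else np.inf)
    return out
```

A nearly duplicated predictor inflates the VIF of both copies, while an independent predictor stays near 1.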
Yarlett, Nigel; Waters, W. Ray; Harp, James A.; Wannemuehler, Michael J.; Morada, Mary; Bellcastro, Josephine; Upton, Steve J.; Marton, Laurence J.; Frydman, Benjamin J.
2007-01-01
The in vivo effectiveness of a series of conformationally restricted polyamine analogues, alone and with selected members in combination with dl-α-difluoromethylarginine, against Cryptosporidium parvum infection was tested in a T-cell receptor alpha-deficient mouse model. Polyamine analogues were selected from the extended bis(ethyl)-sym-homospermidine or bis(ethyl)-spermine backbone having cis or trans double bonds at the center of the molecule. The cis isomers were found to have significantly greater efficacy than the trans analogues in both preventing and curing infection in this model. When tested in combination with dl-α-difluoromethylarginine, the cis-restricted analogues were found to be more effective in preventing oocyst shedding. This study demonstrates the potential of polyamine analogues as anticryptosporidial agents and highlights the presence of multiple points in polyamine synthesis by this parasite that are susceptible to inhibition, resulting in growth inhibition. PMID:17242149
Strategies to intervene on causal systems are adaptively selected.
Coenen, Anna; Rehder, Bob; Gureckis, Todd M
2015-06-01
How do people choose interventions to learn about causal systems? Here, we considered two possibilities. First, we test an information sampling model, information gain, which values interventions that can discriminate between a learner's hypotheses (i.e. possible causal structures). We compare this discriminatory model to a positive testing strategy that instead aims to confirm individual hypotheses. Experiment 1 shows that individual behavior is described best by a mixture of these two alternatives. In Experiment 2 we find that people are able to adaptively alter their behavior and adopt the discriminatory model more often after experiencing that the confirmatory strategy leads to a subjective performance decrement. In Experiment 3, time pressure leads to the opposite effect of inducing a change towards the simpler positive testing strategy. These findings suggest that there is no single strategy that describes how intervention decisions are made. Instead, people select strategies in an adaptive fashion that trades off their expected performance and cognitive effort. Copyright © 2015 Elsevier Inc. All rights reserved.
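The information-gain criterion described above values interventions by their expected reduction in uncertainty over causal hypotheses. A minimal sketch, assuming discrete outcomes and known outcome likelihoods per hypothesis (both simplifying assumptions, not the study's experimental setup):

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction over causal hypotheses from one intervention.

    prior[h] is the learner's belief in hypothesis h; likelihoods[h][o]
    is P(outcome o | h, intervention). The discriminatory strategy
    picks the intervention maximizing this quantity.
    """
    n_h, n_o = len(prior), len(likelihoods[0])
    gain = 0.0
    for o in range(n_o):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(n_h))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o for h in range(n_h)]
        gain += p_o * (entropy(prior) - entropy(posterior))
    return gain
```

An intervention whose outcomes perfectly separate two equally likely hypotheses yields one full bit of expected gain; one whose outcomes are uninformative yields zero, so the discriminatory learner would never choose it.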
Benkert, Pascal; Schwede, Torsten; Tosatto, Silvio Ce
2009-05-20
The selection of the most accurate protein model from a set of alternatives is a crucial step in protein structure prediction, both in template-based and ab initio approaches. Scoring functions have been developed which can either return a quality estimate for a single model or derive a score from the information contained in the ensemble of models for a given sequence. Local structural features occurring more frequently in the ensemble have a greater probability of being correct. Within the context of the CASP experiment, these so-called consensus methods have been shown to perform considerably better in selecting good candidate models, but tend to fail if the best models are far from the dominant structural cluster. In this paper we show that model selection can be improved if both approaches are combined by pre-filtering the models used during the calculation of the structural consensus. Our recently published QMEAN composite scoring function has been improved by including an all-atom interaction potential term. The preliminary model ranking based on the new QMEAN score is used to select a subset of reliable models against which the structural consensus score is calculated. This scoring function, called QMEANclust, achieves a correlation coefficient between predicted quality score and GDT_TS of 0.9 averaged over the 98 CASP7 targets, and performs significantly better in selecting good models from the ensemble of server models than any other group participating in the quality estimation category of CASP7. Both scoring functions are also benchmarked on the MOULDER test set consisting of 20 target proteins, each with 300 alternative models generated by MODELLER. QMEAN outperforms all other tested scoring functions operating on individual models, while the consensus method QMEANclust only works properly on decoy sets containing a certain fraction of near-native conformations.
We also present a local version of QMEAN for the per-residue estimation of model quality (QMEANlocal) and compare it to a new local consensus-based approach. Improved model selection is obtained by using a composite scoring function operating on single models in order to enrich higher quality models which are subsequently used to calculate the structural consensus. The performance of consensus-based methods such as QMEANclust highly depends on the composition and quality of the model ensemble to be analysed. Therefore, performance estimates for consensus methods based on large meta-datasets (e.g. CASP) might overrate their applicability in more realistic modelling situations with smaller sets of models based on individual methods.
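A structural consensus score of the kind QMEANclust builds on can be sketched as mean pairwise similarity across the ensemble. The similarity function and coordinate tuples below are toy stand-ins for a structural superposition score such as GDT_TS, not the actual QMEANclust computation.

```python
import math

def consensus_scores(models, similarity):
    """Rank ensemble models by mean pairwise similarity to all others.

    Features shared by the dominant structural cluster score highly; a
    model far from that cluster scores low even when it is actually the
    best one, which is the failure mode that pre-filtering with a
    single-model score is meant to mitigate.
    """
    n = len(models)
    return [
        sum(similarity(models[i], models[j]) for j in range(n) if j != i) / (n - 1)
        for i in range(n)
    ]

# Toy stand-in: models as coordinate tuples, similarity decaying with
# Euclidean distance (higher = structurally closer).
toy_models = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
toy_sim = lambda a, b: 1.0 / (1.0 + math.dist(a, b))
scores = consensus_scores(toy_models, toy_sim)
```

The three clustered models reinforce each other's scores, while the outlier receives the lowest score regardless of its true quality.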
Garg, Neeraj; Li, Yi-Lin; Garcia Collazo, Ana Maria; Litten, Chris; Ryono, Denis E; Zhang, Minsheng; Caringal, Yolanda; Brigance, Robert P; Meng, Wei; Washburn, William N; Agback, Peter; Mellström, Karin; Rehnmark, Stefan; Rahimi-Ghadim, Mahmoud; Norin, Thomas; Grynfarb, Marlena; Sandberg, Johnny; Grover, Gary; Malm, Johan
2007-08-01
Based on the scaffold of the pharmacologically selective thyromimetic 2b, structurally a close analog to KB-141 (2a), a number of novel N-acylated-alpha-amino acid derivatives were synthesized and tested in a TR radioligand binding assay as well as in a reporter cell assay. On the basis of TRbeta(1)-isoform selectivity and affinity, as well as affinity to the reporter cell assay, 3d was selected for further studies in the cholesterol-fed rat model. In this model 3d revealed an improved therapeutic window between cholesterol and TSH lowering but decreased margins versus tachycardia compared with 2a.
NASA Astrophysics Data System (ADS)
Urry, C. Megan
1997-01-01
This grant was awarded to Dr. C. Megan Urry of the Space Telescope Science Institute in response to two successful ADP proposals to use archival Ginga and Rosat X-ray data for 'Testing the Pairs-Reflection model with X-Ray Spectral Variability' (in collaboration with Paola Grandi, now at the University of Rome) and 'X-Ray Properties of Complete Samples of Radio-Selected BL Lacertae Objects' (in collaboration with then-graduate student Rita Sambruna, now a post-doc at Goddard Space Flight Center). In addition, post-docs Joseph Pesce and Elena Pian, and graduate student Matthew O'Dowd, have worked on several aspects of these projects. The grant was originally awarded on 3/01/94; this report covers the full period, through May 1997. We have completed our project on the X-ray properties of radio-selected BL Lacs.
Wall and corner fire tests on selected wood products
H. C. Tran; M. L. Janssens
1991-01-01
As part of a fire growth program to develop and validate a compartment fire model, several bench-scale and full-scale tests were conducted. This paper reports the full-scale wall and corner test results of step 2 of this study. A room fire test following the ASTM proposed standard specifications was used for these full-scale tests. In step 1, we investigated the...
Performance mapping of a 30 cm engineering model thruster
NASA Technical Reports Server (NTRS)
Poeschel, R. L.; Vahrenkamp, R. P.
1975-01-01
A 30 cm thruster representative of the engineering model design has been tested over a wide range of operating parameters to document performance characteristics such as electrical and propellant efficiencies, double ion and beam divergence thrust loss, component equilibrium temperatures, operational stability, etc. Data obtained show that optimum power throttling, in terms of maximum thruster efficiency, is not highly sensitive to parameter selection. Consequently, considerations of stability, discharge chamber erosion, thrust losses, etc. can be made the determining factors for parameter selection in power throttling operations. Options in parameter selection based on these considerations are discussed.
Naval Aerospace Medical Research Laboratory. 1993 Command History.
1994-04-01
selected student naval aviators score differentially on the test battery and are their scores correlated with flight school performance? 58...Ph.D., attended 3rd Meeting of Accelerated Research Initiative, Neural Constraints on Cognitive Architecture, Learning Research and Development...Shamma, S.E. and Stanny, R.R., "Models of Cognitive Performance Assessment Tests," Mathematical Modeling and Scientific Computing, Vol. 2, pp. 240-245
Support interference of wind tunnel models: A selective annotated bibliography
NASA Technical Reports Server (NTRS)
Tuttle, M. H.; Gloss, B. B.
1981-01-01
This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.
Support interference of wind tunnel models: A selective annotated bibliography
NASA Technical Reports Server (NTRS)
Tuttle, M. H.; Lawing, P. L.
1984-01-01
This bibliography, with abstracts, consists of 143 citations arranged in chronological order by dates of publication. Selection of the citations was made for their relevance to the problems involved in understanding or avoiding support interference in wind tunnel testing throughout the Mach number range. An author index is included.
High Stakes Tests with Self-Selected Essay Questions: Addressing Issues of Fairness
ERIC Educational Resources Information Center
Lamprianou, Iasonas
2008-01-01
This study investigates the effect of reporting the unadjusted raw scores in a high-stakes language exam when raters differ significantly in severity and self-selected questions differ significantly in difficulty. More sophisticated models, introducing meaningful facets and parameters, are successively used to investigate the characteristics of…
Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard; de Sousa, Gracinda
2015-02-01
Blood establishments routinely perform screening immunoassays to assess the safety of blood components. As with any other screening test, the results have an inherent uncertainty. In blood establishments the major concern is the chance of false negatives, due to their possible impact on patients' health. This article briefly reviews GUM and diagnostic accuracy models for screening immunoassays, recommending a scheme to support screening laboratories' staff in the selection of a model considering the intended use of the screening results (i.e., post-transfusion safety). The discussion is grounded in "risk-based thinking," with risk considered from blood donor selection through to the screening immunoassays. A combination of GUM and diagnostic accuracy models to evaluate measurement uncertainty in blood establishments is recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.
Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B
2014-08-01
To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity and entropy (all defined on T2WI); and average ADC. In training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier. A forward selection scheme was used on the remaining 20% of training set supervoxels to identify important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The trained two-feature model was then applied blindly to a separate set of ten test patients, half with TZ tumors, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients. Furthermore, the locations of predicted tumors corresponded spatially with the locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has the potential to accurately predict TZ tumor presence and location without operator input. © 2013 Wiley Periodicals, Inc.
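The forward selection scheme described in this abstract can be sketched generically as a greedy loop over candidate features. The feature names and the scoring function below are hypothetical illustrations (chosen to echo the abstract's selected inputs), not the study's data or classifier.

```python
def forward_select(features, score, max_features=None):
    """Greedy forward selection: at each step add the feature that gives
    the largest strict improvement in `score`; stop when none helps.

    `score` maps a tuple of feature names to a figure of merit such as
    cross-validated accuracy (higher is better).
    """
    selected, remaining = [], list(features)
    best = score(tuple(selected))
    while remaining and (max_features is None or len(selected) < max_features):
        cand, cand_score = None, best
        for f in remaining:
            s = score(tuple(selected + [f]))
            if s > cand_score:
                cand, cand_score = f, s
        if cand is None:
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best

# Hypothetical scoring function: two informative features ("t2_energy",
# "mean_adc") raise accuracy; every uninformative feature costs a penalty.
def toy_score(subset):
    acc = 0.5
    acc += 0.2 * ("t2_energy" in subset) + 0.2 * ("mean_adc" in subset)
    acc -= 0.05 * sum(f not in ("t2_energy", "mean_adc") for f in subset)
    return acc

selected, best_acc = forward_select(
    ["t2_energy", "contrast", "mean_adc", "entropy"], toy_score)
```

With this toy score, the loop adds exactly the two informative features and stops, mirroring how repeated selection runs can converge on a small optimal input set.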
A Comparison of Three Polytomous Item Response Theory Models in the Context of Testlet Scoring.
ERIC Educational Resources Information Center
Cook, Karon F.; Dodd, Barbara G.; Fitzpatrick, Steven J.
1999-01-01
The partial-credit model, the generalized partial-credit model, and the graded-response model were compared in the context of testlet scoring using Scholastic Assessment Tests results (n=2,548) and a simulated data set. Results favor the partial-credit model in this context; considerations for model selection in other contexts are discussed. (SLD)
NASA Astrophysics Data System (ADS)
Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong
2018-01-01
Milk is among the most popular nutrient sources worldwide and is of great interest for its beneficial medicinal properties. The feasibility of classifying milk powder samples by brand and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiments. One contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. It seems that the combination of NIR spectroscopy, MRMR and PLS-DA or PLSR is a powerful tool for classifying different brands of milk and determining the protein content.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
The selection of a probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one such goodness-of-fit test, was originally developed for the normal distribution and has since been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than the alternatives. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. Several plotting position formulas have been suggested for the GEV model; however, the PPCC statistics are derived only for those plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo Simulation ACKNOWLEDGEMENTS This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
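The PPCC statistic itself is simple to compute: correlate the ordered observations with the distribution's quantiles at chosen plotting positions. A minimal sketch for the GEV case, using the generic Weibull plotting position rather than the shape-dependent formulas studied above, might look like:

```python
# Minimal PPCC goodness-of-fit check for a GEV sample. The plotting
# position (Weibull, i/(n+1)) and shape value are illustrative choices.
import numpy as np
from scipy import stats

shape = -0.1   # scipy's shape parameter c for genextreme
sample = np.sort(stats.genextreme.rvs(shape, size=200, random_state=2))

n = len(sample)
pp = np.arange(1, n + 1) / (n + 1)             # Weibull plotting positions
theoretical = stats.genextreme.ppf(pp, shape)  # GEV quantiles at those p

# PPCC: Pearson correlation between ordered data and fitted quantiles;
# values near 1 indicate a good fit.
ppcc = np.corrcoef(sample, theoretical)[0, 1]
print(round(ppcc, 3))
```

In the actual test, this statistic would be compared against a critical value that depends on sample size, significance level, and (for GEV) the shape parameter, which is what the regression equations in the study provide.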
Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz
2012-01-01
From both structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure was developed to identify β-turns in proteins. Binary logistic regression was used for the first time to select significant sequence parameters for the identification of β-turns, via a re-substitution test procedure. The sequence parameters consisted of 80 amino acid positional occurrences and 20 amino acid percentages in the sequence. Among these parameters, the most significant ones selected by the binary logistic regression model were the percentages of Gly and Ser and the occurrence of Asn in position i+2; these parameters have the greatest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed with the parameters selected by binary logistic regression to build a hybrid predictor. The networks were trained and tested on a non-homologous dataset of 565 protein chains. Applying a nine-fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74%, which is comparable with the results of other β-turn prediction methods. In conclusion, this study shows that the parameter selection ability of binary logistic regression, together with the prediction capability of neural networks, leads to more precise models for identifying β-turns in proteins. PMID:27418910
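The two-stage hybrid can be sketched as below, assuming a logistic-regression screen by coefficient magnitude feeding a small neural network; the features and labels are synthetic placeholders, not real sequence data.

```python
# Sketch of the hybrid scheme: stage 1 screens predictors with logistic
# regression, stage 2 trains a neural network on the screened subset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n, p = 500, 20                     # e.g. 20 amino-acid percentages
X = rng.normal(size=(n, p))
# Contrived labels: features 0 and 4 carry the signal.
y = (1.5 * X[:, 0] + 1.0 * X[:, 4] + rng.normal(size=n) > 0).astype(int)

# Stage 1: rank predictors by |coefficient| from an L2 logistic model.
logit = LogisticRegression(max_iter=1000).fit(X, y)
top = np.argsort(-np.abs(logit.coef_[0]))[:3]

# Stage 2: neural network fed only the screened predictors.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=3).fit(X[:, top], y)
print(sorted(top.tolist()), round(net.score(X[:, top], y), 2))
```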
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
Christidi, Foteini; Zalonis, Ioannis; Smyrnis, Nikolaos; Evdokimidis, Ioannis
2012-09-01
The present study investigates selective attention and verbal free recall in amyotrophic lateral sclerosis (ALS) and examines the contribution of selective attention, encoding, consolidation, and retrieval memory processes to patients' verbal free recall. We examined 22 non-demented patients with sporadic ALS and 22 demographically related controls using the Stroop Neuropsychological Screening Test (SNST; selective attention) and the Rey Auditory Verbal Learning Test (RAVLT; immediate & delayed verbal free recall). The item-specific deficit approach (ISDA) was applied to the RAVLT to evaluate encoding, consolidation, and retrieval difficulties. ALS patients performed worse than controls on SNST (p < .001) and RAVLT immediate and delayed recall (p < .001) and showed deficient encoding (p = .001) and consolidation (p = .002) but not retrieval (p = .405). Hierarchical regression analysis revealed that SNST and ISDA indices accounted for: (a) 91.1% of the variance in RAVLT immediate recall, with encoding (p = .016), consolidation (p < .001), and retrieval (p = .032) significantly contributing to the overall model and the SNST alone accounting for 41.6%; and (b) 85.2% of the variance in RAVLT delayed recall, with consolidation (p < .001) and retrieval (p = .008) significantly contributing to the overall model and the SNST alone accounting for 39.8%. Thus, selective attention, encoding, and consolidation, and to a lesser extent retrieval, influenced both immediate and delayed verbal free recall. In conclusion, selective attention and the memory processes of encoding, consolidation, and retrieval should be considered when interpreting patients' impaired free recall. (JINS, 2012, 18, 1-10).
Mechanisms and genetic control of interspecific crossing barriers in Lycopersicon. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutschler, M.A.
Deficiency of Lycopersicon esculentum allele (E) was observed from the RFLP and isozyme data of the F{sub 2} populations derived from the cross L. esculentum x L. pennellii. The genome composition of the F{sub 2} populations containing L. pennellii cytoplasm (F{sub 2}{sup Lp4}) has a lower proportion of the homozygous L. pennellii (PP) genotypes and a higher proportion of heterozygote (EP) genotypes than that of the F{sub 2} populations containing L. esculentum cytoplasm (F{sub 2}{sup Le}). A lower proportion of the L. pennellii alleles (P) was also observed in F{sub 2}{sup Lp4} as compared to F{sub 2}{sup Le} when each marker locus was tested individually. To study the effects of gametic and zygotic selection on segregation distortion, the expected patterns of segregation at a marker locus were derived for ten selection models with gametic or zygotic selection at a hidden linked locus. Segregation distortion caused by four of the selection models studied can be uniquely identified by the patterns of significance expected for the likelihood ratio tests at the marker loci. Comparison of the chromosomal regions associated with specific selection models across populations (of this experiment and previous publications) indicated that the segregation distortion observed in chromosome 10 is associated with zygotic selection affecting both arms of the chromosome, and cytoplasm substitution has the effect of decreasing the segregation distortion on the long arm of the chromosome.
Fan, X-J; Wan, X-B; Huang, Y; Cai, H-M; Fu, X-H; Yang, Z-L; Chen, D-K; Song, S-X; Wu, P-H; Liu, Q; Wang, L; Wang, J-P
2012-01-01
Background: Current imaging modalities are inadequate in preoperatively predicting regional lymph node metastasis (RLNM) status in rectal cancer (RC). Here, we designed support vector machine (SVM) model to address this issue by integrating epithelial–mesenchymal-transition (EMT)-related biomarkers along with clinicopathological variables. Methods: Using tissue microarrays and immunohistochemistry, the EMT-related biomarkers expression was measured in 193 RC patients. Of which, 74 patients were assigned to the training set to select the robust variables for designing SVM model. The SVM model predictive value was validated in the testing set (119 patients). Results: In training set, eight variables, including six EMT-related biomarkers and two clinicopathological variables, were selected to devise SVM model. In testing set, we identified 63 patients with high risk to RLNM and 56 patients with low risk. The sensitivity, specificity and overall accuracy of SVM in predicting RLNM were 68.3%, 81.1% and 72.3%, respectively. Importantly, multivariate logistic regression analysis showed that SVM model was indeed an independent predictor of RLNM status (odds ratio, 11.536; 95% confidence interval, 4.113–32.361; P<0.0001). Conclusion: Our SVM-based model displayed moderately strong predictive power in defining the RLNM status in RC patients, providing an important approach to select RLNM high-risk subgroup for neoadjuvant chemoradiotherapy. PMID:22538975
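The kind of SVM risk classifier and sensitivity/specificity evaluation described in this abstract can be sketched as follows; the data are synthetic stand-ins for the six biomarkers and two clinicopathological variables, and the split sizes are illustrative.

```python
# Sketch: train an RBF-kernel SVM on a training split, then report
# sensitivity and specificity on a held-out test split.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n, p = 300, 8                      # 8 selected variables (synthetic)
X = rng.normal(size=(n, p))
# Contrived RLNM-like label driven by the first two variables.
y = (X[:, 0] + X[:, 1] + 0.8 * rng.normal(size=n) > 0).astype(int)

X_train, y_train = X[:180], y[:180]
X_test, y_test = X[180:], y[180:]

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
pred = svm.predict(X_test)

tp = int(((pred == 1) & (y_test == 1)).sum())
tn = int(((pred == 0) & (y_test == 0)).sum())
sens = tp / max(int((y_test == 1).sum()), 1)   # true positive rate
spec = tn / max(int((y_test == 0).sum()), 1)   # true negative rate
print(round(sens, 2), round(spec, 2))
```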
Does Rational Selection of Training and Test Sets Improve the Outcome of QSAR Modeling?
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external dataset, the best way to validate the predictive ability of a model is to perform its s...
A new fit-for-purpose model testing framework: Decision Crash Tests
NASA Astrophysics Data System (ADS)
Tolson, Bryan; Craig, James
2016-04-01
Decision-makers in water resources are often burdened with selecting appropriate multi-million dollar strategies to mitigate the impacts of climate or land use change. Unfortunately, the suitability of existing hydrologic simulation models to accurately inform decision-making is in doubt because the testing procedures used to evaluate model utility (i.e., model validation) are insufficient. For example, many authors have identified that a good standard framework for model testing called the Klemes Crash Tests (KCTs), which are the classic model validation procedures from Klemeš (1986) that Andréassian et al. (2009) rename as KCTs, have yet to become common practice in hydrology. Furthermore, Andréassian et al. (2009) claim that the progression of hydrological science requires widespread use of KCT and the development of new crash tests. Existing simulation (not forecasting) model testing procedures such as KCTs look backwards (checking for consistency between simulations and past observations) rather than forwards (explicitly assessing if the model is likely to support future decisions). We propose a fundamentally different, forward-looking, decision-oriented hydrologic model testing framework based upon the concept of fit-for-purpose model testing that we call Decision Crash Tests or DCTs. Key DCT elements are i) the model purpose (i.e., decision the model is meant to support) must be identified so that model outputs can be mapped to management decisions ii) the framework evaluates not just the selected hydrologic model but the entire suite of model-building decisions associated with model discretization, calibration etc. The framework is constructed to directly and quantitatively evaluate model suitability. The DCT framework is applied to a model building case study on the Grand River in Ontario, Canada. 
A hypothetical binary decision scenario is analysed (upgrade or not upgrade the existing flood control structure) under two different sets of model building decisions. In one case, we show the set of model building decisions has a low probability to correctly support the upgrade decision. In the other case, we show evidence suggesting another set of model building decisions has a high probability to correctly support the decision. The proposed DCT framework focuses on what model users typically care about: the management decision in question. The DCT framework will often be very strict and will produce easy to interpret results enabling clear unsuitability determinations. In the past, hydrologic modelling progress has necessarily meant new models and model building methods. Continued progress in hydrologic modelling requires finding clear evidence to motivate researchers to disregard unproductive models and methods and the DCT framework is built to produce this kind of evidence. References: Andréassian, V., C. Perrin, L. Berthet, N. Le Moine, J. Lerat, C. Loumagne, L. Oudin, T. Mathevet, M.-H. Ramos, and A. Valéry (2009), Crash tests for a standardized evaluation of hydrological models. Hydrology and Earth System Sciences, 13, 1757-1764. Klemeš, V. (1986), Operational testing of hydrological simulation models. Hydrological Sciences Journal, 31 (1), 13-24.
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
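The additive Multiple Attribute Utility calculation at the heart of such a model can be illustrated with a toy example; the systems, attributes, weights, and scores below are invented for illustration, not taken from the paper.

```python
# Toy MAUT ranking: each lighting system gets attribute utilities in
# [0, 1], weighted by (hypothetical) expert-elicited importance; the
# system with the highest total utility is selected.
weights = {"energy_cost": 0.4, "light_quality": 0.35, "capital_cost": 0.25}

systems = {
    "HPS":         {"energy_cost": 0.6, "light_quality": 0.7, "capital_cost": 0.8},
    "LED":         {"energy_cost": 0.9, "light_quality": 0.8, "capital_cost": 0.4},
    "Fluorescent": {"energy_cost": 0.7, "light_quality": 0.5, "capital_cost": 0.9},
}

def utility(attrs):
    # Additive utility: sum over attributes of weight * attribute utility.
    return sum(weights[a] * u for a, u in attrs.items())

scores = {name: round(utility(attrs), 3) for name, attrs in systems.items()}
best = max(scores, key=scores.get)
print(scores, best)
```

In the paper's model the attribute utilities came from expert input and performance simulations rather than fixed numbers like these.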
Wang, Yonghua; Li, Yan; Wang, Bin
2007-01-01
Nicotine and a variety of other drugs and toxins are metabolized by cytochrome P450 (CYP) 2A6. The aim of the present study was to build a quantitative structure-activity relationship (QSAR) model to predict the activities of nicotine analogues on CYP2A6. Kernel partial least squares (K-PLS) regression was employed with the electro-topological descriptors to build the computational models. Both the internal and external predictabilities of the models were evaluated with test sets to ensure their validity and reliability. As a comparison to K-PLS, a standard PLS algorithm was also applied on the same training and test sets. Our results show that the K-PLS produced reasonable results that outperformed the PLS model on the datasets. The obtained K-PLS model will be helpful for the design of novel nicotine-like selective CYP2A6 inhibitors.
Yield Model Development (YMD) implementation plan for fiscal years 1981 and 1982
NASA Technical Reports Server (NTRS)
Ambroziak, R. A. (Principal Investigator)
1981-01-01
A plan is described for supporting USDA crop production forecasting and estimation by (1) testing, evaluating, and selecting crop yield models for application testing; (2) identifying areas of feasible research for improvement of models; and (3) conducting research to modify existing models and to develop new crop yield assessment methods. Tasks to be performed for each of these efforts are described as well as for project management and support. The responsibilities of USDA, USDC, USDI, and NASA are delineated as well as problem areas to be addressed.
Selectivity Mechanism of ATP-Competitive Inhibitors for PKB and PKA.
Wu, Ke; Pang, Jingzhi; Song, Dong; Zhu, Ying; Wu, Congwen; Shao, Tianqu; Chen, Haifeng
2015-07-01
Protein kinase B (PKB) acts as a central node in the PI3K kinase pathway. Constitutive activation and overexpression of PKB have been implicated in various cancers. However, protein kinase A (PKA), which shares high homology with PKB, is essential for metabolic regulation. Therefore, specific targeting of PKB is a crucial strategy in antitumor drug design and development. Here, we have revealed the selectivity mechanism of PKB inhibitors with molecular dynamics simulation and 3D-QSAR methods. Selective inhibitors of PKB could form more hydrogen bonds and hydrophobic contacts with PKB than with PKA. This could explain why the selective inhibitor M128 is more potent against PKB than against PKA. Then, 3D-QSAR models were constructed for these selective inhibitors and evaluated with test set compounds. Comparison of the 3D-QSAR models for PKB inhibitors and PKA inhibitors reveals possible ways to improve inhibitor selectivity. These models can be used to design new chemical entities and to make quantitative predictions for specific selective inhibitors before resorting to in vitro and in vivo experiments. © 2014 John Wiley & Sons A/S.
A system dynamics approach to analyze laboratory test errors.
Guo, Shijing; Roudsari, Abdul; Garcez, Artur d'Avila
2015-01-01
Although much research has been carried out to analyze laboratory test errors during the last decade, a systemic view is still lacking, especially for tracing errors through the test process and evaluating potential interventions. This study applies system dynamics modeling to laboratory errors in order to trace laboratory error flows and to simulate system behavior while changing internal variable values. A change in a variable may reflect a change in demand or a proposed intervention. A review of the literature on laboratory test errors is given and serves as the main data source for the system dynamics model. Three "what if" scenarios were selected for testing the model. System behavior was observed and compared under the different scenarios over a period of time. The results suggest that system dynamics modeling has the potential to help in understanding laboratory errors, observing model behavior, and providing risk-free simulation experiments for possible strategies.
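A bare-bones stock-and-flow simulation of the kind described, with invented rates and a single "what if" intervention, might look like:

```python
# Sketch: errors enter a "latent errors" stock, are detected and
# corrected at some rate, or escape into reports. All rates are
# invented for illustration, not taken from the study.
def simulate(error_rate, detection_frac, steps=100, dt=1.0):
    latent = 0.0          # stock: undetected laboratory errors
    reported = 0.0        # stock: errors that reached the final report
    for _ in range(steps):
        inflow = error_rate                    # errors created per step
        detected = detection_frac * latent     # caught by checks
        escaped = 0.05 * latent                # slip through to reports
        latent += dt * (inflow - detected - escaped)
        reported += dt * escaped
    return latent, reported

# "What if" scenario: doubling the detection effectiveness.
base = simulate(error_rate=10.0, detection_frac=0.2)
improved = simulate(error_rate=10.0, detection_frac=0.4)
print(round(base[1], 1), round(improved[1], 1))
```

Comparing `base` and `improved` mimics the scenario testing in the study: the intervention changes an internal variable and the simulated stocks show its downstream effect.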
Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements
NASA Technical Reports Server (NTRS)
Vargas, Magda B.; Counter, Douglas
2011-01-01
Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures, conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: 5% measurements must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest (acoustics: 200 - 2,000 Hz full scale equals 4,000 - 40,000 Hz model scale; ignition transient: 0 - 100 Hz full scale equals 0 - 2,000 Hz model scale). Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
NASA Astrophysics Data System (ADS)
Kut, Stanislaw; Ryzinska, Grazyna; Niedzialek, Bernadetta
2016-01-01
The article presents the results of tests verifying the effectiveness of nine selected elastomeric material models (Neo-Hookean, Mooney with two and three constants, Signorini, Yeoh, Ogden, Arruda-Boyce, Gent and Marlow), whose material constants were determined from a single material test - uniaxial tension testing. The convergence of the nine analyzed models was assessed by comparing their performance in an experimental bending test of elastomer samples with the results of numerical FEM calculations for each material model. To calculate the material constants for the analyzed materials, a model was generated from the stress-strain characteristics obtained in an experimental uniaxial tensile test on elastomeric dumbbell samples, taking into account the parameters recorded in its 18th cycle. Using the material constants calculated in this way, numerical simulation of the bending process of elastomeric, parallelepipedic samples was carried out using the MARC/Mentat program.
Improving the baking quality of bread wheat by genomic selection in early generations.
Michel, Sebastian; Kummer, Christian; Gallee, Martin; Hellinger, Jakob; Ametz, Christian; Akgöl, Batuhan; Epure, Doru; Güngör, Huseyin; Löschenberger, Franziska; Buerstmayr, Hermann
2018-02-01
Genomic selection shows great promise for pre-selecting lines with superior bread baking quality in early generations, 3 years ahead of labour-intensive, time-consuming, and costly quality analysis. The genetic improvement of baking quality is one of the grand challenges in wheat breeding as the assessment of the associated traits often involves time-consuming, labour-intensive, and costly testing forcing breeders to postpone sophisticated quality tests to the very last phases of variety development. The prospect of genomic selection for complex traits like grain yield has been shown in numerous studies, and might thus be also an interesting method to select for baking quality traits. Hence, we focused in this study on the accuracy of genomic selection for laborious and expensive to phenotype quality traits as well as its selection response in comparison with phenotypic selection. More than 400 genotyped wheat lines were, therefore, phenotyped for protein content, dough viscoelastic and mixing properties related to baking quality in multi-environment trials 2009-2016. The average prediction accuracy across three independent validation populations was r = 0.39 and could be increased to r = 0.47 by modelling major QTL as fixed effects as well as employing multi-trait prediction models, which resulted in an acceptable prediction accuracy for all dough rheological traits (r = 0.38-0.63). Genomic selection can furthermore be applied 2-3 years earlier than direct phenotypic selection, and the estimated selection response was nearly twice as high in comparison with indirect selection by protein content for baking quality related traits. This considerable advantage of genomic selection could accordingly support breeders in their selection decisions and aid in efficiently combining superior baking quality with grain yield in newly developed wheat varieties.
Robertson, Sam; Woods, Carl; Gastin, Paul
2015-09-01
To develop a physiological performance and anthropometric attribute model to predict Australian Football League draft selection. Cross-sectional observational. Data were obtained (n=4902) from three Under-18 Australian football competitions between 2010 and 2013. Players were allocated into one of three groups, based on their highest level of selection in their final year of junior football (Australian Football League Drafted, n=292; National Championship, n=293; State-level club, n=4317). Physiological performance (vertical jumps, agility, speed and running endurance) and anthropometric (body mass and height) data were obtained. Hedges' effect sizes were calculated to assess the influence of selection-level and competition on these physical attributes, with logistic regression models constructed to discriminate Australian Football League Drafted and National Championship players. Rule induction analysis was undertaken to determine a set of rules for discriminating selection-level. Effect size comparisons revealed a range of small to moderate differences between State-level club players and both other groups for all attributes, with trivial to small differences between Australian Football League Drafted and National Championship players noted. Logistic regression models showed multistage fitness test, height and 20 m sprint time as the most important attributes in predicting Draft success. Rule induction analysis showed that players displaying multistage fitness test scores of >14.01 and/or 20 m sprint times of <2.99 s were most likely to be recruited. High levels of performance in aerobic and/or speed tests increase the likelihood of elite junior Australian football players being recruited to the highest level of the sport. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Hoisington, Charles M.; Gregg, Watson W.; Coronado, Patrick L.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)
1993-01-01
An analysis of orbit propagation models was performed by the Mission Operations element of the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Project, which has overall responsibility for the instrument scheduling. The orbit propagators selected for this analysis are widely available general perturbations models. The analysis includes both absolute accuracy determination and comparisons of different versions of the models. The results show that all of the models tested meet accuracy requirements for scheduling and data acquisition purposes. For internal Project use the SGP4 propagator, developed by the North American Air Defense (NORAD) Command, has been selected. This model includes atmospheric drag effects and, therefore, provides better accuracy. For High Resolution Picture Transmission (HRPT) ground stations, which have less stringent accuracy requirements, the publicly available Brouwer-Lyddane models are recommended. The SeaWiFS Project will make available portable source code for a version of this model developed by the Data Capture Facility (DCF).
NASA Astrophysics Data System (ADS)
Min, Qing-xu; Zhu, Jun-zhen; Feng, Fu-zhou; Xu, Chao; Sun, Ji-wei
2017-06-01
In this paper, the lock-in vibrothermography (LVT) is utilized for defect detection. Specifically, for a metal plate with an artificial fatigue crack, the temperature rise of the defective area is used for analyzing the influence of different test conditions, i.e. engagement force, excitation intensity, and modulated frequency. The multivariate nonlinear and logistic regression models are employed to estimate the POD (probability of detection) and POA (probability of alarm) of fatigue crack, respectively. The resulting optimal selection of test conditions is presented. The study aims to provide an optimized selection method of the test conditions in the vibrothermography system with the enhanced detection ability.
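A one-variable POD curve estimated via logistic regression on hit/miss outcomes, with a made-up "excitation intensity" standing in for the paper's multivariate model, can be sketched as:

```python
# Sketch: fit a logistic POD (probability of detection) curve to
# binary hit/miss inspection outcomes against one test condition.
# The data-generating model and numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
intensity = rng.uniform(0.0, 10.0, size=400)
# Simulated truth: detection probability rises with intensity.
p_true = 1.0 / (1.0 + np.exp(-(intensity - 5.0)))
hits = (rng.uniform(size=400) < p_true).astype(int)

model = LogisticRegression().fit(intensity.reshape(-1, 1), hits)
pod_low = model.predict_proba([[2.0]])[0, 1]    # POD at low intensity
pod_high = model.predict_proba([[8.0]])[0, 1]   # POD at high intensity
print(round(pod_low, 2), round(pod_high, 2))
```

Sweeping the fitted curve over candidate test conditions is one way to pick the settings that maximize detection probability, which is the spirit of the optimized selection above.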
The NASA Langley building solar project and the supporting Lewis solar technology program
NASA Technical Reports Server (NTRS)
Ragsdale, R. G.; Namkoong, D.
1974-01-01
A solar energy technology program is described that includes solar collector testing in an indoor solar simulator facility and in an outdoor test facility, property measurements of solar panel coatings, and operation of a laboratory-scale solar model system test facility. Early results from simulator tests indicate that non-selective coatings behave more nearly in accord with predicted performance than do selective coatings. Initial experiments on the decay rate of thermally stratified hot water in a storage tank have been run. Results suggest that where high temperature water is required, excess solar energy collected by a building solar system should be stored overnight in the form of chilled water rather than hot water.
SUPERCRITICAL WATER OXIDATION MODEL DEVELOPMENT FOR SELECTED EPA PRIORITY POLLUTANTS
Supercritical Water Oxidation (SCWO) was evaluated for five compounds: acetic acid, 2,4-dichlorophenol, pentachlorophenol, pyridine, and 2,4-dichlorophenoxyacetic acid (methyl ester). Kinetic models were developed for acetic acid, 2,4-dichlorophenol, and pyridine. The test compounds were e...
NASA Technical Reports Server (NTRS)
Schlundt, D. W.
1976-01-01
The installed performance degradation of a swivel nozzle thrust deflector system obtained during increased vectoring angles of a large-scale test program was investigated and improved. Small-scale models were used to generate performance data for analyzing selected swivel nozzle configurations. A single-swivel nozzle design model with five different nozzle configurations and a twin-swivel nozzle design model, scaled to 0.15 size of the large-scale test hardware, were statically tested at low exhaust pressure ratios of 1.4, 1.3, 1.2, and 1.1 and vectored at four nozzle positions from 0 deg cruise through 90 deg vertical used for the VTOL mode.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested by using univariate and multinomial logistic regression models, applying odds ratio and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
An Adaptive Genetic Association Test Using Double Kernel Machines.
Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis
2015-10-01
Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.
Frequency Spectrum Neutrality Tests: One for All and All for One
Achaz, Guillaume
2009-01-01
Neutrality tests based on the frequency spectrum (e.g., Tajima's D or Fu and Li's F) are commonly used by population geneticists as routine tests to assess the goodness-of-fit of the standard neutral model on their data sets. Here, I show that these neutrality tests are specific instances of a general model that encompasses them all. I illustrate how this general framework can be taken advantage of to devise new more powerful tests that better detect deviations from the standard model. Finally, I exemplify the usefulness of the framework on SNP data by showing how it supports the selection hypothesis in the lactase human gene by overcoming the ascertainment bias. The framework presented here paves the way for constructing novel tests optimized for specific violations of the standard model that ultimately will help to unravel scenarios of evolution. PMID:19546320
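A minimal sketch of one such frequency-spectrum statistic, Tajima's D, computed from a sample of 0/1 haplotype strings (the function and data below are illustrative, not the general framework proposed in the abstract):

```python
import math

def tajimas_d(haplotypes):
    """Tajima's D for a sample of equal-length 0/1 haplotype strings.

    D compares mean pairwise diversity (pi) with Watterson's estimator
    (S / a1); D > 0 can indicate balancing selection, D < 0 a sweep.
    """
    n = len(haplotypes)
    L = len(haplotypes[0])
    # S: number of segregating (polymorphic) sites
    S = sum(1 for j in range(L) if len({h[j] for h in haplotypes}) > 1)
    # pi: mean number of pairwise differences
    total = sum(sum(a != b for a, b in zip(h1, h2))
                for i, h1 in enumerate(haplotypes)
                for h2 in haplotypes[i + 1:])
    pi = total / (n * (n - 1) / 2)
    # Tajima (1989) normalizing constants
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / math.sqrt(e1 * S + e2 * S * (S - 1))
```

For the four two-site haplotypes 00, 01, 10, 11, pi (4/3) exceeds S/a1 (about 1.09), so D is positive; a sample where both variants are singletons gives a negative D.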
AVN-492, A Novel Highly Selective 5-HT6R Antagonist: Preclinical Evaluation.
Ivachtchenko, Alexandre V; Okun, Ilya; Aladinskiy, Vladimir; Ivanenkov, Yan; Koryakova, Angela; Karapetyan, Ruben; Mitkin, Oleg; Salimov, Ramiz; Ivashchenko, Andrey
2017-01-01
Discovery of 5-HT6 receptor subtype and its exclusive localization within the central nervous system led to extensive investigations of its role in Alzheimer's disease, schizophrenia, and obesity. In the present study, we present preclinical evaluation of a novel highly-potent and highly-selective 5-HT6R antagonist, AVN-492. The affinity of AVN-492 to bind to 5-HT6R (Ki = 91 pM) was more than three orders of magnitude higher than that to bind to the only other target, 5-HT2BR, (Ki = 170 nM). Thus, the compound displayed great 5-HT6R selectivity against all other serotonin receptor subtypes, and is extremely specific against any other receptors such as adrenergic, GABAergic, dopaminergic, histaminergic, etc. AVN-492 demonstrates good in vitro and in vivo ADME profile with high oral bioavailability and good brain permeability in rodents. In behavioral tests, AVN-492 shows anxiolytic effect in elevated plus-maze model, prevents an apomorphine-induced disruption of startle pre-pulse inhibition (the PPI model) and reverses a scopolamine- and MK-801-induced memory deficit in passive avoidance model. No anti-obesity effect of AVN-492 was found in a murine model. The data presented here strongly indicate that due to its high oral bioavailability, extremely high selectivity, and potency to block the 5-HT6 receptor, AVN-492 is a very promising tool for evaluating the role the 5-HT6 receptor might play in cognitive and neurodegenerative impairments. AVN-492 is an excellent drug candidate to be tested for treatment of such diseases, and is currently being tested in Phase I trials.
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
Kawaura, Kazuaki; Karasawa, Jun-ichi; Chaki, Shigeyuki; Hikichi, Hirohiko
2014-08-15
A 5-trial inhibitory avoidance test using spontaneously hypertensive rat (SHR) pups has been used as an animal model of attention deficit hyperactivity disorder (ADHD). However, the roles of noradrenergic systems, which are involved in the pathophysiology of ADHD, have not been investigated in this model. In the present study, the effects of adrenergic α2 receptor stimulation, which has been an effective treatment for ADHD, on attention/cognition performance were investigated in this model. Moreover, neuronal mechanisms mediated through adrenergic α2 receptors were investigated. We evaluated the effects of both clonidine, a non-selective adrenergic α2 receptor agonist, and guanfacine, a selective adrenergic α2A receptor agonist, using a 5-trial inhibitory avoidance test with SHR pups. Juvenile SHR exhibited a shorter transfer latency, compared with juvenile Wistar Kyoto (WKY) rats. Both clonidine and guanfacine significantly prolonged the transfer latency of juvenile SHR. The effects of clonidine and guanfacine were significantly blocked by pretreatment with an adrenergic α2A receptor antagonist. In contrast, the effect of clonidine was not attenuated by pretreatment with an adrenergic α2B receptor antagonist, or an adrenergic α2C receptor antagonist, while it was attenuated by a non-selective adrenergic α2 receptor antagonist. Furthermore, the effects of neither clonidine nor guanfacine were blocked by pretreatment with a selective noradrenergic neurotoxin. These results suggest that the stimulation of the adrenergic α2A receptor improves the attention/cognition performance of juvenile SHR in the 5-trial inhibitory avoidance test and that postsynaptic, rather than presynaptic, adrenergic α2A receptor is involved in this effect. Copyright © 2014 Elsevier B.V. All rights reserved.
Jack, Corin; Hotchkiss, Emily; Sargison, Neil D; Toma, Luiza; Milne, Catherine; Bartley, David J
2017-04-01
Nematode control in sheep, by strategic use of anthelmintics, is threatened by the emergence of roundworm populations that are resistant to one or more of the currently available drugs. In response to growing concerns about Anthelmintic Resistance (AR) development in UK sheep flocks, the Sustainable Control of Parasites in Sheep (SCOPS) initiative was set up in 2003 in order to promote practical guidelines for producers and advisors. To facilitate the uptake of 'best practice' approaches to nematode management, a comprehensive understanding of the various factors influencing sheep farmers' adoption of the SCOPS principles is required. A telephone survey of 400 Scottish sheep farmers was conducted to elicit attitudes regarding roundworm control, AR and 'best practice' recommendations. A quantitative statistical analysis approach using structural equation modelling was chosen to test the relationships between both observed and latent variables relating to general roundworm control beliefs. A model framework was developed to test the influence of socio-psychological factors on the uptake of sustainable (SCOPS) and known unsustainable (AR-selective) roundworm control practices. The analysis identified eleven factors with significant influences on the adoption of SCOPS-recommended practices and AR-selective practices. Two models established a good fit with the observed data, explaining 54% and 47% of the variance in SCOPS and AR-selective behaviours, respectively. The key influences on the adoption of best-practice parasite management, which also acted as negative influences on employing AR-selective practices, were farmers' baseline understanding of roundworm control and confirmation of a lack of anthelmintic efficacy in a flock. The findings suggest that improving farmers' acceptance and uptake of diagnostic testing, and improving underlying knowledge and awareness about nematode control, may influence the adoption of best-practice behaviour.
Copyright © 2017 Elsevier B.V. All rights reserved.
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
Huang, Xiaoqiang; Han, Kehang; Zhu, Yushan
2013-01-01
A systematic optimization model for binding sequence selection in computational enzyme design was developed based on the transition state theory of enzyme catalysis and graph-theoretical modeling. The saddle point on the free energy surface of the reaction system was represented by catalytic geometrical constraints, and the binding energy between the active site and transition state was minimized to reduce the activation energy barrier. The resulting hyperscale combinatorial optimization problem was tackled using a novel heuristic global optimization algorithm, which was inspired and tested by the protein core sequence selection problem. The sequence recapitulation tests on native active sites for two enzyme catalyzed hydrolytic reactions were applied to evaluate the predictive power of the design methodology. The results of the calculation show that most of the native binding sites can be successfully identified if the catalytic geometrical constraints and the structural motifs of the substrate are taken into account. Reliably predicting active site sequences may have significant implications for the creation of novel enzymes that are capable of catalyzing targeted chemical reactions. PMID:23649589
Adaptive transmission disequilibrium test for family trio design.
Yuan, Min; Tian, Xin; Zheng, Gang; Yang, Yaning
2009-01-01
The transmission disequilibrium test (TDT) is a standard method to detect association using the family trio design. It is optimal for an additive genetic model. Other TDT-type tests optimal for recessive and dominant models have also been developed. Association tests using family data, including the TDT-type statistics, have been unified into a class of more comprehensive and flexible family-based association tests (FBAT). TDT-type tests have high efficiency when the genetic model is known or correctly specified, but may lose power if the model is mis-specified. Hence tests that are robust to genetic model mis-specification yet efficient are preferred. The constrained likelihood ratio test (CLRT) and the MAX-type test have been shown to be efficiency robust. In this paper we propose a new efficiency-robust procedure, referred to as the adaptive TDT (aTDT). It uses the Hardy-Weinberg disequilibrium coefficient to identify the potential genetic model underlying the data and then applies the TDT-type test (or FBAT for general applications) corresponding to the selected model. Simulation demonstrates that aTDT is efficiency robust to model mis-specification and generally outperforms the MAX test and CLRT in terms of power. We also show that aTDT has power close to that of the optimal TDT-type test based on a single genetic model, while being much more robust. Applications to real and simulated data from the Genetic Analysis Workshop (GAW) illustrate the use of our adaptive TDT.
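The two ingredients of the procedure can be sketched as follows; this is a simplified illustration, not the authors' implementation, and the function names, the tolerance, and the genotype/transmission counts are all made up for the example:

```python
def hwd_coefficient(n_aa, n_ab, n_bb):
    """Hardy-Weinberg disequilibrium coefficient in cases:
    delta = P(AA) - P(A)^2.  delta > 0 hints at a recessive model,
    delta < 0 at a dominant one, delta near 0 at an additive one."""
    n = n_aa + n_ab + n_bb
    p_aa = n_aa / n
    p_a = (2 * n_aa + n_ab) / (2 * n)
    return p_aa - p_a ** 2

def select_model(delta, tol=0.01):
    """Map the HWD coefficient to a working genetic model."""
    if delta > tol:
        return "recessive"
    if delta < -tol:
        return "dominant"
    return "additive"

def tdt(b, c):
    """Classic TDT chi-square: b transmissions of the risk allele from
    heterozygous parents, c non-transmissions."""
    return (b - c) ** 2 / (b + c)
```

Once a model is chosen, the TDT-type statistic optimal for that model (or the corresponding FBAT) would be applied; `tdt` above is the additive version.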
ERIC Educational Resources Information Center
Reckase, Mark D.
Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…
ERIC Educational Resources Information Center
Hoover, H. D.; Plake, Barbara
The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B) the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…
Mansourian, Robert; Mutch, David M; Antille, Nicolas; Aubert, Jerome; Fogel, Paul; Le Goff, Jean-Marc; Moulin, Julie; Petrov, Anton; Rytz, Andreas; Voegel, Johannes J; Roberts, Matthew-Alan
2004-11-01
Microarray technology has become a powerful research tool in many fields of study; however, the cost of microarrays often results in the use of a low number of replicates (k). Under circumstances where k is low, it becomes difficult to perform standard statistical tests to extract the most biologically significant experimental results. Other more advanced statistical tests have been developed; however, their use and interpretation often remain difficult to implement in routine biological research. The present work outlines a method that achieves sufficient statistical power for selecting differentially expressed genes under conditions of low k, while remaining an intuitive and computationally efficient procedure. The present study describes a Global Error Assessment (GEA) methodology to select differentially expressed genes in microarray datasets, and was developed using an in vitro experiment that compared control and interferon-gamma treated skin cells. In this experiment, up to nine replicates were used to confidently estimate error, thereby enabling methods of different statistical power to be compared. Genes of similar absolute expression are binned, so as to enable a highly accurate local estimate of the mean squared error within conditions. The model then relates the variability of gene expression in each bin to absolute expression levels and uses this in a test derived from the classical ANOVA. The GEA selection method is compared with both the classical and permutational ANOVA tests, and demonstrates increased stability, robustness and confidence in gene selection. A subset of the selected genes was validated by real-time reverse transcription-polymerase chain reaction (RT-PCR). All these results suggest that the GEA methodology is (i) suitable for selection of differentially expressed genes in microarray data, (ii) intuitive and computationally efficient and (iii) especially advantageous under conditions of low k.
The GEA code for R software is freely available upon request to authors.
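The binning-plus-pooled-error idea behind GEA can be sketched as below; this is a toy reconstruction under stated assumptions (genes sharing a bin share a locally pooled mean squared error, which then moderates a t-like statistic), not the authors' R code:

```python
import math

def gea_scores(genes, bin_size=50):
    """GEA-style sketch: genes of similar average abundance share a
    locally pooled error estimate.

    genes: list of (control_replicates, treated_replicates) tuples.
    Returns moderated t-like scores in the original gene order.
    """
    per_gene = []
    for idx, (ctrl, trt) in enumerate(genes):
        mc = sum(ctrl) / len(ctrl)
        mt = sum(trt) / len(trt)
        ss = (sum((x - mc) ** 2 for x in ctrl)
              + sum((x - mt) ** 2 for x in trt))
        df = len(ctrl) + len(trt) - 2
        w = 1.0 / len(ctrl) + 1.0 / len(trt)
        per_gene.append((0.5 * (mc + mt), idx, mt - mc, ss, df, w))
    per_gene.sort()  # order genes by mean abundance before binning
    scores = [0.0] * len(genes)
    for start in range(0, len(per_gene), bin_size):
        chunk = per_gene[start:start + bin_size]
        # local MSE pooled over all genes in this abundance bin
        mse = sum(g[3] for g in chunk) / sum(g[4] for g in chunk)
        for _, idx, diff, _, _, w in chunk:
            scores[idx] = diff / math.sqrt(mse * w)
    return scores
```

With duplicate measurements, a gene with a large treatment shift scores far higher than a neighbour in the same bin with a negligible shift, even though neither has enough replicates for a stable per-gene variance.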
Implementation of Structured Inquiry Based Model Learning toward Students' Understanding of Geometry
ERIC Educational Resources Information Center
Salim, Kalbin; Tiawa, Dayang Hjh
2015-01-01
The purpose of this study is the implementation of a structured inquiry learning model in the instruction of geometry. The study used a quasi-experimental design with two sample classes selected from a population of ten classes using a cluster random sampling technique. The data collection tool consists of a test item…
Three probes for diagnosing photochemical dynamics are presented and applied to specialized ambient surface-level observations and to a numerical photochemical model to better understand rates of production and other process information in the atmosphere and in the model. Howeve...
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chen-Wei; Wu, Shiu-Lien
2013-01-01
The random-threshold generalized unfolding model (RTGUM) was developed by treating the thresholds in the generalized unfolding model as random effects rather than fixed effects to account for the subjective nature of the selection of categories in Likert items. The parameters of the new model can be estimated with the JAGS (Just Another Gibbs…
ERIC Educational Resources Information Center
Wang, Chun
2013-01-01
Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weakness of each examinee whereas CAT algorithms choose items to determine those…
Using Fit Indexes to Select a Covariance Model for Longitudinal Data
ERIC Educational Resources Information Center
Liu, Siwei; Rovine, Michael J.; Molenaar, Peter C. M.
2012-01-01
This study investigated the performance of fit indexes in selecting a covariance structure for longitudinal data. Data were simulated to follow a compound symmetry, first-order autoregressive, first-order moving average, or random-coefficients covariance structure. We examined the ability of the likelihood ratio test (LRT), root mean square error…
Developing Instructional Applications at the Secondary Level. The Computer as a Tool.
ERIC Educational Resources Information Center
McManus, Jack; And Others
Case studies are presented for seven Los Angeles area (California) high schools that worked with Pepperdine University in the IBM/ETS (International Business Machines/Educational Testing Service) Model Schools program, a project which provided training for selected secondary school teachers in the use of personal computers and selected software as…
A Model for Predicting Learning Flow and Achievement in Corporate e-Learning
ERIC Educational Resources Information Center
Joo, Young Ju; Lim, Kyu Yon; Kim, Su Mi
2012-01-01
The primary objective of this study was to investigate the determinants of learning flow and achievement in corporate online training. Self-efficacy, intrinsic value, and test anxiety were selected as learners' motivational factors, while perceived usefulness and ease of use were also selected as learning environmental factors. Learning flow was…
Sex Role Learning: A Test of the Selective Attention Hypothesis.
ERIC Educational Resources Information Center
Bryan, Janice Westlund; Luria, Zella
This paper reports three studies designed to determine whether children show selective attention and/or differential memory to slide pictures of same-sex vs. opposite-sex models and activities. Attention was measured using a feedback EEG procedure, which measured the presence or absence of alpha rhythms in the subjects' brains during presentation…
Selective Attentional Effects of Textbook Study Questions on Student Learning in Science.
ERIC Educational Resources Information Center
Holliday, William G.
1981-01-01
Reports results of a study testing a selective attentional model which predicted that textbook study questions adjunct to a flow diagram will focus students' attention more upon questioned information and less upon nonquestioned information. A picture-word diagram describing biogeochemical cycles to high school biology students (N=176) was used.…
Brankov, Jovan G
2013-10-21
The channelized Hotelling observer (CHO) has become a widely used approach for evaluating medical image quality, acting as a surrogate for human observers in early-stage research on assessment and optimization of imaging devices and algorithms. The CHO is typically used to measure lesion detectability. Its popularity stems from experiments showing that the CHO's detection performance can correlate well with that of human observers. In some cases, CHO performance overestimates human performance; to counteract this effect, an internal-noise model is introduced, which allows the CHO to be tuned to match human-observer performance. Typically, this tuning is achieved using example data obtained from human observers. We argue that this internal-noise tuning step is essentially a model training exercise; therefore, just as in supervised learning, it is essential to test the CHO with an internal-noise model on a set of data that is distinct from that used to tune (train) the model. Furthermore, we argue that, if the CHO is to provide useful insights about new imaging algorithms or devices, the test data should reflect such potential differences from the training data; it is not sufficient simply to use new noise realizations of the same imaging method. Motivated by these considerations, the novelty of this paper is the use of new model selection criteria to evaluate ten established internal-noise models, utilizing four different channel models, in a train-test approach. Though not the focus of the paper, a new internal-noise model is also proposed that outperformed the ten established models in the cases tested. The results, using cardiac perfusion SPECT data, show that the proposed train-test approach is necessary, as judged by the newly proposed model selection criteria, to avoid spurious conclusions. 
The results also demonstrate that, in some models, the optimal internal-noise parameter is very sensitive to the choice of training data; therefore, these models are prone to overfitting, and will not likely generalize well to new data. In addition, we present an alternative interpretation of the CHO as a penalized linear regression wherein the penalization term is defined by the internal-noise model.
The evolution of sexes: A specific test of the disruptive selection theory.
da Silva, Jack
2018-01-01
The disruptive selection theory of the evolution of anisogamy posits that the evolution of a larger body or greater organismal complexity selects for a larger zygote, which in turn selects for larger gametes. This may provide the opportunity for one mating type to produce more numerous, small gametes, forcing the other mating type to produce fewer, large gametes. Predictions common to this and related theories have been partially upheld. Here, a prediction specific to the disruptive selection theory is derived from a previously published game-theoretic model that represents the most complete description of the theory. The prediction, that the ratio of macrogamete to microgamete size should be above three for anisogamous species, is supported for the volvocine algae. A fully population genetic implementation of the model, involving mutation, genetic drift, and selection, is used to verify the game-theoretic approach and accurately simulates the evolution of gamete sizes in anisogamous species. This model was extended to include a locus for gamete motility and shows that oogamy should evolve whenever there is costly motility. The classic twofold cost of sex may be derived from the fitness functions of these models, showing that this cost is ultimately due to genetic conflict.
Genetic signatures of natural selection in a model invasive ascidian
Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin
2017-01-01
Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta. PMID:28266616
Prediction of biodegradability of aromatics in water using QSAR modeling.
Cvetnic, Matija; Juretic Perisic, Daria; Kovacic, Marin; Kusic, Hrvoje; Dermadi, Jasna; Horvat, Sanja; Bolanca, Tomislav; Marin, Vedrana; Karamanis, Panaghiotis; Loncaric Bozic, Ana
2017-05-01
The study was aimed at developing models for predicting the biodegradability of aromatic water pollutants. For that purpose, 36 single-benzene-ring compounds, with different types, numbers and positions of substituents, were used. The biodegradability was estimated according to the ratio of the biochemical (BOD5) and chemical (COD) oxygen demand values determined for parent compounds ((BOD5/COD)0), as well as for their reaction mixtures at the half-life achieved by the UV-C/H2O2 process ((BOD5/COD)t1/2). The models correlating biodegradability and molecular structure characteristics of the studied pollutants were derived using quantitative structure-activity relationship (QSAR) principles and tools. Upon derivation of the models and calibration on the training set and subsequent testing on the test set, 3- and 5-variable models were selected as the most predictive for (BOD5/COD)0 and (BOD5/COD)t1/2, respectively, according to the values of the statistical parameters R² and Q². Hence, the 3-variable model predicting (BOD5/COD)0 possessed R²=0.863 and Q²=0.799 for the training set, and R²=0.710 for the test set, while the 5-variable model predicting (BOD5/COD)t1/2 possessed R²=0.886 and Q²=0.788 for the training set, and R²=0.564 for the test set. The selected models are interpretable and transparent, reflecting key structural features that influence targeted biodegradability and can be correlated with the degradation mechanisms of the studied compounds by UV-C/H2O2. Copyright © 2017 Elsevier Inc. All rights reserved.
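The R² and Q² statistics used above to rank candidate models can be illustrated on a one-descriptor least-squares model; this is a generic sketch of the two metrics (R² on the training fit, Q² via leave-one-out cross-validation), not the paper's QSAR software:

```python
def fit_line(xs, ys):
    """Ordinary least squares y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def r2_q2(xs, ys):
    """R2 on the training data and leave-one-out cross-validated Q2."""
    a, b = fit_line(xs, ys)
    my = sum(ys) / len(ys)
    sst = sum((y - my) ** 2 for y in ys)
    ssr = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    press = 0.0
    for i in range(len(xs)):
        # refit with observation i held out, then predict it
        a_i, b_i = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a_i + b_i * xs[i])) ** 2
    return 1 - ssr / sst, 1 - press / sst
```

Because each held-out residual is at least as large as the corresponding training residual, Q² never exceeds R², which is why both are reported when judging predictivity.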
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW
2007-01-01
Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty that allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables data were missing in the range of 0 and 48.1%. We used four methods to investigate the influence of respectively sampling and imputation variation: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined at the range of 0% (full model) to 90% of variable selection, bootstrap corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend to account for both imputation and sampling variation in sets of missing data. The new procedure of combining MI with bootstrapping for variable selection, results in multivariable prognostic models with good performance and is therefore attractive to apply on data sets with missing values. PMID:17629912
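The inclusion-frequency idea can be sketched with a bootstrap loop around a deliberately simple selection rule; here a univariate correlation threshold stands in for the stepwise selection used in the study, and the function names, threshold, and data are all illustrative:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a degenerate column."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    if sxx == 0 or syy == 0:
        return 0.0
    return sxy / math.sqrt(sxx * syy)

def inclusion_frequencies(X, y, n_boot=50, threshold=0.5, seed=42):
    """Proportion of bootstrap resamples in which each candidate
    variable is selected (|r| > threshold stands in for stepwise)."""
    rng = random.Random(seed)
    n, p = len(y), len(X[0])
    counts = [0] * p
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample rows
        for j in range(p):
            col = [X[i][j] for i in idx]
            out = [y[i] for i in idx]
            if abs(pearson(col, out)) > threshold:
                counts[j] += 1
    return [c / n_boot for c in counts]
```

A variable that is genuinely prognostic is selected in almost every resample (inclusion frequency near 1), while a noise variable is selected only occasionally; a cut-off on this frequency then defines the final model.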
Research on the Correlation between Vehicle Cycle and Engine Cycle in Heavy-Duty Commercial Vehicles
NASA Astrophysics Data System (ADS)
Lin, Chen; Zhong, Wang; Shuai, Liu
2017-12-01
To study the correlation between the vehicle cycle and the engine cycle in heavy commercial vehicles, a model converting the vehicle cycle to the engine cycle was constructed based on vehicle power-system theory and the shift strategy, and verified on a diesel truck. The results show that the model reproduces engine operation rationally and reliably. During high-speed acceleration, differences in the model's gear selection lead to deviations from the actual operation. Compared with the drum test, the engine speed distribution obtained by the model is shifted to the right, corresponding to a lower gear; gear selection thus has a strong influence on the model.
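The core of a vehicle-to-engine cycle conversion is the kinematic chain from vehicle speed through final drive and gearbox to engine speed, plus a shift strategy that decides which gear applies at each speed. A minimal sketch of both pieces follows; the gear ratios, wheel radius, and rpm band below are made-up illustrative numbers, not values from the paper.

```python
import math

def engine_speed_rpm(v_mps, gear_ratio, final_drive, wheel_radius_m):
    """Map vehicle speed to engine speed through the driveline:
    wheel angular speed = v / r, multiplied up through gearbox and final drive."""
    omega_wheel = v_mps / wheel_radius_m           # rad/s at the wheel
    omega_engine = omega_wheel * gear_ratio * final_drive
    return omega_engine * 60.0 / (2.0 * math.pi)   # rad/s -> rpm

def pick_gear(v_mps, gear_ratios, final_drive, wheel_radius_m,
              rpm_min=1000.0, rpm_max=2200.0):
    """Toy shift strategy: choose the highest gear (smallest ratio) that keeps
    the engine inside its preferred speed band. Choosing a lower gear shifts
    the engine-speed distribution to the right, as the abstract notes."""
    for g in sorted(gear_ratios):                  # smallest ratio = highest gear
        rpm = engine_speed_rpm(v_mps, g, final_drive, wheel_radius_m)
        if rpm_min <= rpm <= rpm_max:
            return g, rpm
    g = max(gear_ratios)                           # fall back to lowest gear
    return g, engine_speed_rpm(v_mps, g, final_drive, wheel_radius_m)
```

Running the conversion over a whole vehicle speed trace, point by point, yields the model's engine-cycle speed distribution that the paper compares against the drum test.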
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz
2015-10-06
In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on selected descriptors were built using partial least squares (PLS), whereas root mean square error of prediction (RMSEP) was used as a fitness function for their selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All the sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
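The GA loop behind this kind of descriptor selection is compact enough to sketch: evolve a binary mask over features with tournament selection, one-point crossover, mutation, and elitism. For brevity the sketch scores a mask with a simple correlation-minus-penalty fitness instead of the paper's PLS/RMSEP fitness, so it illustrates the GA mechanics rather than the authors' exact pipeline; all names and parameter values are ours.

```python
import random

def fitness(mask, corrs, penalty=0.35):
    # reward descriptors that correlate with the response, penalize model size
    return sum(c - penalty for c, m in zip(corrs, mask) if m)

def ga_select(corrs, pop_size=30, gens=40, p_mut=0.05, seed=0):
    """Evolve a 0/1 inclusion mask over len(corrs) descriptors."""
    rng = random.Random(seed)
    p = len(corrs)
    pop = [[rng.randint(0, 1) for _ in range(p)] for _ in range(pop_size)]
    best = max(pop, key=lambda m: fitness(m, corrs))
    for _ in range(gens):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a, corrs) >= fitness(b, corrs) else b
        children = [best[:]]                       # elitism: keep the best mask
        while len(children) < pop_size:
            pa, pb = tournament(), tournament()
            cut = rng.randrange(1, p)              # one-point crossover
            child = pa[:cut] + pb[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
        best = max(pop + [best], key=lambda m: fitness(m, corrs))
    return best
```

In the real setting, `fitness` would refit a PLS model on the masked descriptor matrix and return the negative RMSEP, which is exactly where the method's computational cost arises.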
Watts, Sarah E; Weems, Carl F
2006-12-01
The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed measures of the child's anxiety disorder symptoms. Youth completed assessments measuring selective attention, memory bias, and cognitive errors. Results indicated that selective attention, memory bias, and cognitive errors were each correlated with childhood anxiety problems and provide support for a cognitive model of anxiety which posits that these three biases are associated with childhood anxiety problems. Only limited support for significant interrelations among selective attention, memory bias, and cognitive errors was found. Finally, results point towards an effective strategy for moving the assessment of selective attention to younger and community samples of youth.
Development and test of selected model pedestrian safety regulations
DOT National Transportation Integrated Search
1981-04-01
Two model regulations to remove parking--one from suburban streets in daylight hours and one on the last 50 feet of the approach to crosswalks--were designed in previous work to prevent pedestrian dart and dash accidents by removing screening vehicle...
A global logrank test for adaptive treatment strategies based on observational studies.
Li, Zhiguo; Valenstein, Marcia; Pfeiffer, Paul; Ganoczy, Dara
2014-02-28
In studying adaptive treatment strategies, a natural question that is of paramount interest is whether there is any significant difference among all possible treatment strategies. When the outcome variable of interest is time-to-event, we propose an inverse probability weighted logrank test for testing the equivalence of a fixed set of pre-specified adaptive treatment strategies based on data from an observational study. The weights take into account both the possible selection bias in an observational study and the fact that the same subject may be consistent with more than one treatment strategy. The asymptotic distribution of the weighted logrank statistic under the null hypothesis is obtained. We show that, in an observational study where the treatment selection probabilities need to be estimated, the estimation of these probabilities does not have an effect on the asymptotic distribution of the weighted logrank statistic, as long as the estimation of the parameters in the models for these probabilities is n-consistent. Finite sample performance of the test is assessed via a simulation study. We also show in the simulation that the test can be pretty robust to misspecification of the models for the probabilities of treatment selection. The method is applied to analyze data on antidepressant adherence time from an observational database maintained at the Department of Veterans Affairs' Serious Mental Illness Treatment Research and Evaluation Center. Copyright © 2013 John Wiley & Sons, Ltd.
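A weighted logrank statistic of this kind can be written down directly from the risk-set definition: at each event time, compare the weighted observed events in one group with their expectation under the null. The sketch below is a simplified illustration; the variance term uses the standard hypergeometric form, which is only approximate once the inverse-probability weights differ from 1 (the paper derives the correct asymptotic distribution), and the function name is ours.

```python
def weighted_logrank(times, events, groups, weights):
    """times: event/censoring times; events: 1 = event, 0 = censored;
    groups: 0/1 strategy label; weights: inverse-probability weights.
    Returns (U, V): the weighted observed-minus-expected sum and an
    approximate variance."""
    U = V = 0.0
    for t in sorted({ti for ti, ei in zip(times, events) if ei}):
        # weighted at-risk totals and event totals at time t, per group
        y = [0.0, 0.0]
        d = [0.0, 0.0]
        for ti, ei, gi, wi in zip(times, events, groups, weights):
            if ti >= t:
                y[gi] += wi
            if ti == t and ei:
                d[gi] += wi
        Y, D = y[0] + y[1], d[0] + d[1]
        U += d[1] - D * y[1] / Y            # observed minus expected in group 1
        if Y > 1:
            V += (y[0] * y[1] * D * (Y - D)) / (Y * Y * (Y - 1))
    return U, V
```

With all weights equal to 1 this reduces to the ordinary logrank numerator and variance; U/sqrt(V) would then be compared to a standard normal.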
Review of GEM Radiation Belt Dropout and Buildup Challenges
NASA Astrophysics Data System (ADS)
Tu, Weichao; Li, Wen; Morley, Steve; Albert, Jay
2017-04-01
In Summer 2015 the US NSF GEM (Geospace Environment Modeling) focus group named "Quantitative Assessment of Radiation Belt Modeling" started the "RB dropout" and "RB buildup" challenges, focused on quantitative modeling of the radiation belt buildups and dropouts. This is a community effort which includes selecting challenge events, gathering model inputs that are required to model the radiation belt dynamics during these events (e.g., various magnetospheric waves, plasmapause and density models, electron phase space density data), simulating the challenge events using different types of radiation belt models, and validating the model results by comparison to in situ observations of radiation belt electrons (from Van Allen Probes, THEMIS, GOES, LANL/GEO, etc). The goal is to quantitatively assess the relative importance of various acceleration, transport, and loss processes in the observed radiation belt dropouts and buildups. Since 2015, the community has selected four "challenge" events under four different categories: "storm-time enhancements", "non-storm enhancements", "storm-time dropouts", and "non-storm dropouts". Model inputs and data for each selected event have been coordinated and shared within the community to establish a common basis for simulations and testing. Modelers within and outside the US with different types of radiation belt models (diffusion-type, diffusion-convection-type, test particle codes, etc.) have participated in our challenge and shared their simulation results and comparisons with spacecraft measurements. Significant progress has been made in quantitative modeling of the radiation belt buildups and dropouts, as well as in assessing the models with new measures of model performance. In this presentation, I will review the activities from our "RB dropout" and "RB buildup" challenges and the progress achieved in understanding radiation belt physics and improving model validation and verification.
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
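Hamming loss, the multilabel metric quoted above, counts label-wise mistakes rather than whole-protocol mistakes, which suits sequence-level protocolling where each MRI order maps to a set of sequences. A small self-contained version follows; the sequence names in the example are invented, not taken from the paper.

```python
def hamming_loss(true_sets, pred_sets, all_labels):
    """Fraction of (sample, label) decisions that are wrong: a label counts
    as an error if it appears in exactly one of the true/predicted sets."""
    total = errors = 0
    for truth, pred in zip(true_sets, pred_sets):
        for label in all_labels:
            total += 1
            if (label in truth) != (label in pred):
                errors += 1
    return errors / total

# illustrative sequence labels (hypothetical):
labels = ["T1", "T2", "FLAIR", "DWI"]
truth = [{"T1", "T2"}, {"FLAIR", "DWI"}]
pred = [{"T2", "DWI"}, {"FLAIR", "DWI"}]
loss = hamming_loss(truth, pred, labels)  # 2 wrong decisions out of 8 -> 0.25
```

A loss of 0.0487, as reported for the gradient boosting model, therefore means roughly one wrong sequence decision in every twenty.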
Status of DSMT research program
NASA Technical Reports Server (NTRS)
Mcgowan, Paul E.; Javeed, Mehzad; Edighoffer, Harold H.
1991-01-01
The status of the Dynamic Scale Model Technology (DSMT) research program is presented. DSMT is developing scale model technology for large space structures as part of the Control Structure Interaction (CSI) program at NASA Langley Research Center (LaRC). Under DSMT a hybrid-scale structural dynamics model of Space Station Freedom was developed. Space Station Freedom was selected as the focus structure for DSMT since the station represents the first opportunity to obtain flight data on a complex, three-dimensional space structure. An overview of DSMT is included, covering the development of the space station scale model and the resulting hardware. Scaling technology was developed for this model to achieve a ground test article which existing test facilities can accommodate while employing realistically scaled hardware. The model was designed and fabricated by the Lockheed Missile and Space Co. and is assembled at LaRC for dynamic testing. Results from ground tests and analyses of the various model components are presented along with plans for future subassembly and mated-model tests. Finally, utilization of the scale model for enhancing analysis verification of the full-scale space station is considered.
Animal models for testing anti-prion drugs.
Fernández-Borges, Natalia; Elezgarai, Saioa R; Eraña, Hasier; Castilla, Joaquín
2013-01-01
Prion diseases belong to a group of fatal infectious diseases with no effective therapies available. Throughout the last 35 years, fewer than 50 different drugs have been tested in different experimental animal models without hopeful results. An important limitation when searching for new drugs is the availability of appropriate models of the disease. The three different possible origins of prion diseases require the existence of different animal models for testing anti-prion compounds. Wild-type mice, over-expressing transgenic mice and other more sophisticated animal models have been used to evaluate a diversity of compounds, some of which were previously tested in different in vitro experimental models. The complexity of prion diseases will require more pre-screening studies, reliable sporadic (or spontaneous) animal models and accurate chemical modifications of the selected compounds before an effective therapy against human prion diseases is available. This review is intended to present the most relevant animal models that have been used in the search for new anti-prion therapies and to describe some possible procedures for handling chemical compounds presumed to have anti-prion activity prior to testing them in animal models.
Tumor morphology and phenotypic evolution driven by selective pressure from the microenvironment.
Anderson, Alexander R A; Weaver, Alissa M; Cummings, Peter T; Quaranta, Vito
2006-12-01
Emergence of invasive behavior in cancer is life-threatening, yet ill-defined due to its multifactorial nature. We present a multiscale mathematical model of cancer invasion, which considers cellular and microenvironmental factors simultaneously and interactively. Unexpectedly, the model simulations predict that harsh tumor microenvironment conditions (e.g., hypoxia, heterogenous extracellular matrix) exert a dramatic selective force on the tumor, which grows as an invasive mass with fingering margins, dominated by a few clones with aggressive traits. In contrast, mild microenvironment conditions (e.g., normoxia, homogeneous matrix) allow clones with similar aggressive traits to coexist with less aggressive phenotypes in a heterogeneous tumor mass with smooth, noninvasive margins. Thus, the genetic make-up of a cancer cell may realize its invasive potential through a clonal evolution process driven by definable microenvironmental selective forces. Our mathematical model provides a theoretical/experimental framework to quantitatively characterize this selective pressure for invasion and test ways to eliminate it.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
Gray, Stephen J; Gallo, David A
2015-01-01
People can use a content-specific recapitulation strategy to trigger memories (i.e., mentally reinstating encoding conditions), but how people deploy this strategy is unclear. Is recapitulation naturally used to guide all recollection attempts, or is it only used selectively, after retrieving incomplete information that requires additional monitoring? According to a retrieval orientation model, people use recapitulation whenever they search memory for specific information, regardless of what information might come to mind. In contrast, according to a postretrieval monitoring model, people selectively engage recapitulation only after retrieving ambiguous information in order to evaluate this information and guide additional retrieval attempts. We tested between these models using a criterial recollection task, and by manipulating the strength of ambiguous information associated with to-be-rejected foils (i.e., familiarity or noncriterial information). Replicating prior work, foil rejections were greater when people attempted to recollect targets studied at a semantic level (deep test) compared to an orthographic level (shallow test), implicating more accurate retrieval monitoring. To investigate the role of a recapitulation strategy in this monitoring process, a final test assessed memory for the foils that were earlier processed on these recollection tests. Performance on this foil recognition test suggested that people had engaged in more elaborative content-specific recapitulation when initially tested for deep compared to shallow recollections, and critically, this elaboration effect did not interact with the experimental manipulation of foil strength. These results support the retrieval orientation model, whereby a recapitulation strategy was used to orient retrieval toward specific information during every recollection attempt. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Ash, Robert L.; Dillon-Townes, Lawrence A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
NEXT Thruster Component Verification Testing
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Sovey, James S.
2007-01-01
Component testing is a critical part of thruster life validation activities under NASA's Evolutionary Xenon Thruster (NEXT) project. The high-voltage propellant isolators were selected for design verification testing. Even though they are based on a heritage design, design changes were made because the isolators will be operated under different environmental conditions including temperature, voltage, and pressure. The life test of two NEXT isolators was therefore initiated and has accumulated more than 10,000 hr of operation. Measurements to date indicate only a negligibly small increase in leakage current. The cathode heaters were also selected for verification testing. The technology to fabricate these heaters, developed for the International Space Station plasma contactor hollow cathode assembly, was transferred to Aerojet for the fabrication of the NEXT prototype model ion thrusters. Testing the contractor-fabricated heaters is necessary to validate fabrication processes for high-reliability heaters. This paper documents the status of the propellant isolator and cathode heater tests.
Testing atomic mass models with radioactive beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.
1989-01-01
Significantly increased yields of new or poorly characterized exotic isotopes that lie far from beta-decay stability can be expected when radioactive beams are used to produce these nuclides. Measurements of the masses of these new species are very important. Such measurements are motivated by the general tendency of mass models to diverge from one another upon excursions from the line of beta-stability. Therefore in these regions (where atomic mass data are presently nonexistent or sparse) the models can be tested rigorously to highlight the features that affect the quality of their short-range and long-range extrapolation properties. Selection of systems to study can be guided, in part, by a desire to probe those mass regions where distinctions among mass models are most apparent and where yields of exotic isotopes, produced via radioactive beams, can be optimized. Identification of models in such regions that have good predictive properties will aid materially in guiding the selection of additional experiments which ultimately will provide expansion of the atomic mass database for further refinement of the mass models. 6 refs., 5 figs.
Dong, Zuoli; Zhang, Naiqian; Li, Chun; Wang, Haiyun; Fang, Yun; Wang, Jun; Zheng, Xiaoqi
2015-06-30
An enduring challenge in personalized medicine is to select the right drug for each patient. Testing drugs on patients in large clinical trials is one way to assess their efficacy and toxicity, but it is impractical to test the hundreds of drugs currently under development. Preclinical prediction models are therefore highly desirable, as they enable drug response to be predicted across hundreds of cell lines in parallel. Recently, two large-scale pharmacogenomic studies screened multiple anticancer drugs on over 1000 cell lines in an effort to elucidate the response mechanisms of anticancer drugs. To this end, we used gene expression features and drug sensitivity data in the Cancer Cell Line Encyclopedia (CCLE) to build a predictor based on a Support Vector Machine (SVM) and a recursive feature selection tool. The robustness of our model was validated by cross-validation and on an independent dataset, the Cancer Genome Project (CGP). Our model achieved good cross-validation performance for most drugs in the Cancer Cell Line Encyclopedia (≥80% accuracy for 10 drugs, ≥75% accuracy for 19 drugs). Independent tests on eleven drugs common to CCLE and CGP achieved satisfactory performance for three of them, i.e., AZD6244, Erlotinib and PD-0325901, using expression levels of only twelve, six and seven genes, respectively. These results suggest that drug response can be effectively predicted from genomic features. Our model could be applied to predict drug response for certain drugs and could potentially play a complementary role in personalized medicine.
Talent identification model for sprinter using discriminant factor
NASA Astrophysics Data System (ADS)
Kusnanik, N. W.; Hariyanto, A.; Herdyanto, Y.; Satia, A.
2018-01-01
The main purpose of this study was to identify young talented sprinters using discriminant factors. The research was conducted in three steps: item pooling, screening of the item pool, and trials of the instruments on small and large samples. 315 male elementary school students aged 11-13 years participated in this study. Data were collected by measuring anthropometry (standing height, sitting height, body mass, and leg length) and by testing physical fitness (40 m sprint for speed, shuttle run for agility, standing broad jump for power, and the multistage fitness test for endurance). Data were analyzed using discriminant analysis. Five items were selected as the instrument for identifying young talented sprinters: sitting height, body mass, leg length, 40 m sprint, and the multistage fitness test. The discriminant model for sprinter talent identification was D = -24.497 + (0.155 × sitting height) + (0.080 × body mass) + (0.148 × leg length) - (1.225 × 40 m sprint) + (0.563 × MFT). In conclusion, the selected test items and the discriminant model can be applied to identify young sprinting talent.
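The reported discriminant function is directly computable from the five measurements. A sketch follows; the abstract does not state the measurement units or the cut-off score for classifying a child as talented, so the function simply returns the score, and the example inputs are invented.

```python
def sprinter_discriminant(sitting_height_cm, body_mass_kg, leg_length_cm,
                          sprint_40m_s, mft_level):
    """Discriminant score from the abstract:
    D = -24.497 + 0.155*sitting height + 0.080*body mass
        + 0.148*leg length - 1.225*40 m sprint + 0.563*MFT."""
    return (-24.497
            + 0.155 * sitting_height_cm
            + 0.080 * body_mass_kg
            + 0.148 * leg_length_cm
            - 1.225 * sprint_40m_s
            + 0.563 * mft_level)
```

Note the signs: a faster 40 m time (smaller value) and a higher multistage fitness level both raise the score, as one would expect for sprint talent.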
NASA Technical Reports Server (NTRS)
Subramanyam, Guru; Vignesparamoorthy, Sivaruban; Mueller, Carl; VanKeuls, Fred; Warner, Joseph; Miranda, Felix A.
2001-01-01
The main purpose of this work is to study the effect of a selectively etched ferroelectric thin film layer on the performance of an electrically tunable filter. An X-band tunable filter was designed, fabricated and tested on a selectively etched Barium Strontium Titanate (BSTO) ferroelectric thin film layer. Tunable filters with varying lengths of BSTO thin film in the input and output coupling gaps were modeled, as well as experimentally tested. Experimental results showed that filters with coupling gaps partially filled with BSTO maintained frequency tunability and improved the insertion loss by approximately 2 dB. To the best of our knowledge, these results represent the first experimental demonstration of the advantages of selective etching in the performance of thin film ferroelectric-based tunable microwave components.
Optimal test selection for prediction uncertainty reduction
Mullins, Joshua; Mahadevan, Sankaran; Urbina, Angel
2016-12-02
Economic factors and experimental limitations often lead to sparse and/or imprecise data used for the calibration and validation of computational models. This paper addresses resource allocation for calibration and validation experiments, in order to maximize their effectiveness within given resource constraints. When observation data are used for model calibration, the quality of the inferred parameter descriptions is directly affected by the quality and quantity of the data. This paper characterizes parameter uncertainty within a probabilistic framework, which enables the uncertainty to be systematically reduced with additional data. The validation assessment is also uncertain in the presence of sparse and imprecise data; therefore, this paper proposes an approach for quantifying the resulting validation uncertainty. Since calibration and validation uncertainty affect the prediction of interest, the proposed framework explores the decision of cost versus importance of data in terms of the impact on the prediction uncertainty. Often, calibration and validation tests may be performed for different input scenarios, and this paper shows how the calibration and validation results from different conditions may be integrated into the prediction. Then, a constrained discrete optimization formulation that selects the number of tests of each type (calibration or validation at given input conditions) is proposed. Furthermore, the proposed test selection methodology is demonstrated on a microelectromechanical system (MEMS) example.
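The constrained discrete optimization at the heart of the test-selection step can be illustrated with a brute-force version: enumerate how many tests of each type to run, score each allocation with a toy uncertainty model, and keep the best feasible choice. The multiplicative shrinkage model, gains, costs, and budget below are all invented for illustration; the paper's actual objective is the propagated prediction uncertainty.

```python
from itertools import product

def select_tests(gains, costs, budget, max_each=5, base_uncertainty=1.0):
    """Pick counts n_i of each test type minimizing
    base * prod_i (1 + gains[i]) ** (-n_i)
    subject to sum_i costs[i] * n_i <= budget (exhaustive search)."""
    best_counts, best_u = None, float("inf")
    for counts in product(range(max_each + 1), repeat=len(gains)):
        cost = sum(c * n for c, n in zip(costs, counts))
        if cost > budget:
            continue                      # infeasible allocation
        u = base_uncertainty
        for g, n in zip(gains, counts):
            u *= (1.0 + g) ** (-n)        # each test shrinks uncertainty
        if u < best_u:
            best_counts, best_u = counts, u
    return best_counts, best_u
```

Exhaustive enumeration is fine for a handful of test types; with many types one would switch to integer programming or a greedy marginal-value-per-cost heuristic.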
Guillaume, Bryan; Wang, Changqing; Poh, Joann; Shen, Mo Jun; Ong, Mei Lyn; Tan, Pei Fang; Karnani, Neerja; Meaney, Michael; Qiu, Anqi
2018-06-01
Statistical inference on neuroimaging data is often conducted using a mass-univariate model, equivalent to fitting a linear model at every voxel with a known set of covariates. Due to the large number of linear models, it is challenging to check if the selection of covariates is appropriate and to modify this selection adequately. The use of standard diagnostics, such as residual plotting, is clearly not practical for neuroimaging data. However, the selection of covariates is crucial for linear regression to ensure valid statistical inference. In particular, the mean model of regression needs to be reasonably well specified. Unfortunately, this issue is often overlooked in the field of neuroimaging. This study aims to adopt the existing Confounder Adjusted Testing and Estimation (CATE) approach and to extend it for use with neuroimaging data. We propose a modification of CATE that can yield valid statistical inferences using Principal Component Analysis (PCA) estimators instead of Maximum Likelihood (ML) estimators. We then propose a non-parametric hypothesis testing procedure that can improve upon parametric testing. Monte Carlo simulations show that the modification of CATE allows for more accurate modelling of neuroimaging data and can in turn yield a better control of False Positive Rate (FPR) and Family-Wise Error Rate (FWER). We demonstrate its application to an Epigenome-Wide Association Study (EWAS) on neonatal brain imaging and umbilical cord DNA methylation data obtained as part of a longitudinal cohort study. Software for this CATE study is freely available at http://www.bioeng.nus.edu.sg/cfa/Imaging_Genetics2.html. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
Agatha: Disentangling period signals from correlated noise in a periodogram framework
NASA Astrophysics Data System (ADS)
Feng, F.; Tuomi, M.; Jones, H. R. A.
2018-04-01
Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.
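The basic ingredient of any such periodogram framework is a power estimate per trial frequency. A minimal classical (Schuster) periodogram, which Agatha generalizes with likelihood-based noise models and model comparison, can be sketched as follows; the function names are ours.

```python
import math

def periodogram_power(t, y, period):
    """Classical periodogram power at one trial period: squared projection
    of the mean-subtracted series onto sin and cos at that frequency."""
    mean = sum(y) / len(y)
    omega = 2.0 * math.pi / period
    c = sum((yi - mean) * math.cos(omega * ti) for ti, yi in zip(t, y))
    s = sum((yi - mean) * math.sin(omega * ti) for ti, yi in zip(t, y))
    return (c * c + s * s) / len(y)

def best_period(t, y, periods):
    """Scan a grid of trial periods and return the one with maximum power."""
    return max(periods, key=lambda p: periodogram_power(t, y, p))
```

Agatha's contribution sits on top of this idea: instead of raw power, each trial frequency is scored by a maximized or marginalized likelihood under competing correlated-noise models, which is what lets it disentangle genuine periodic signals from noise.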
Experimental study of a generic high-speed civil transport
NASA Technical Reports Server (NTRS)
Belton, Pamela S.; Campbell, Richard L.
1992-01-01
An experimental study of generic high-speed civil transport was conducted in the NASA Langley 8-ft Transonic Pressure Tunnel. The data base was obtained for the purpose of assessing the accuracy of various levels of computational analysis. Two models differing only in wingtip geometry were tested with and without flow-through nacelles. The baseline model has a curved or crescent wingtip shape, while the second model has a more conventional straight wingtip shape. The study was conducted at Mach numbers from 0.30 to 1.19. Force data were obtained on both the straight wingtip model and the curved wingtip model. Only the curved wingtip model was instrumented for measuring pressures. Selected longitudinal, lateral, and directional data are presented for both models. Selected pressure distributions for the curved wingtip model are also presented.
Ecological and personal predictors of science achievement in an urban center
NASA Astrophysics Data System (ADS)
Guidubaldi, John Michael
This study sought to examine selected personal and environmental factors that predict urban students' achievement test scores on the science subject area of the Ohio standardized test. Variables examined were in the general categories of teacher/classroom, student, and parent/home. The study assumed that these clusters might add independent variance to a best predictor model, and that discovering the relative strength of different predictors might lead to better selection of intervention strategies to improve student performance. This study was conducted in an urban school district and comprised teachers and students enrolled in ninth grade science in three of this district's high schools. Consenting teachers (9), students (196), and parents (196) received written surveys with questions designed to examine the predictive power of each variable cluster. Regression analyses were used to determine which factors best correlate with student scores and classroom science grades. Selected factors were then compiled into a best predictive model, predicting success on standardized science tests. Student's t tests on gender and racial subgroups confirmed that there were racial differences in OPT scores, and both gender and racial differences in science grades. Additional examinations were therefore conducted for all 12 variables to determine whether gender and race had an impact on the strength of individual variable predictions and on the final best predictor model. Of the 15 original OPT and cluster variable hypotheses, eight showed significant positive relationships that occurred in the expected direction. However, when the more broadly based end-of-year science class grade was used as a criterion, 13 of the 15 hypotheses showed significant relationships in the expected direction. With both criteria, significant gender and racial differences were observed in the strength of individual predictors and in the composition of best predictor models.
Suvorov, Anton; Jensen, Nicholas O; Sharkey, Camilla R; Fujimoto, M Stanley; Bodily, Paul; Wightman, Haley M Cahill; Ogden, T Heath; Clement, Mark J; Bybee, Seth M
2017-03-01
Gene duplication plays a central role in adaptation to novel environments by providing new genetic material for functional divergence and evolution of biological complexity. Several evolutionary models have been proposed for gene duplication to explain how new gene copies are preserved by natural selection, but these models have rarely been tested using empirical data. Opsin proteins, when combined with a chromophore, form a photopigment that is responsible for the absorption of light, the first step in the phototransduction cascade. Adaptive gene duplications have occurred many times within the animal opsins' gene family, leading to novel wavelength sensitivities. Consequently, opsins are an attractive choice for the study of gene duplication evolutionary models. Odonata (dragonflies and damselflies) have the largest opsin repertoire of any insect currently known. Additionally, there is tremendous variation in opsin copy number between species, particularly in the long-wavelength-sensitive (LWS) class. Using comprehensive phylotranscriptomic and statistical approaches, we tested various evolutionary models of gene duplication. Our results suggest that both the blue-sensitive (BS) and LWS opsin classes were subjected to strong positive selection that greatly weakens after multiple duplication events, a pattern that is consistent with the permanent heterozygote model. Due to the immense interspecific variation and duplicability potential of opsin genes among odonates, they represent a unique model system to test hypotheses regarding opsin gene duplication and diversification at the molecular level. © 2016 John Wiley & Sons Ltd.
Moore, Holly; Geyer, Mark A; Carter, Cameron S; Barch, Deanna M
2013-11-01
Over the past two decades, the awareness of the disabling and treatment-refractory effects of impaired cognition in schizophrenia has increased dramatically. In response to this still unmet need in the treatment of schizophrenia, the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative was developed. The goal of CNTRICS is to harness cognitive neuroscience to develop a brain-based set of tools for measuring cognition in schizophrenia and to test new treatments. CNTRICS meetings focused on development of tasks with cognitive construct validity for use in both human and animal model studies. This special issue presents papers discussing the cognitive testing paradigms selected by CNTRICS for animal model systems. These paradigms are designed to measure cognitive constructs within the domains of perception, attention, executive function, working memory, object/relational long-term memory, and social/affective processes. Copyright © 2013. Published by Elsevier Ltd.
Moore, Holly; Geyer, Mark A.; Carter, Cameron S.; Barch, Deanna M.
2014-01-01
Over the past two decades, the awareness of the disabling and treatment-refractory effects of impaired cognition in schizophrenia has increased dramatically. In response to this still unmet need in the treatment of schizophrenia, the Cognitive Neuroscience Treatment Research to Improve Cognition in Schizophrenia (CNTRICS) initiative was developed. The goal of CNTRICS is to harness cognitive neuroscience to develop a brain-based set of tools for measuring cognition in schizophrenia and to test new treatments. CNTRICS meetings focused on development of tasks with cognitive construct validity for use in both human and animal model studies. This special issue presents papers discussing the cognitive testing paradigms selected by CNTRICS for animal model systems. These paradigms are designed to measure cognitive constructs within the domains of perception, attention, executive function, working memory, object/relational long-term memory, and social/affective processes. PMID:24090823
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
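The core update step described above, relating changes in physical parameters to model residuals through a linear sensitivity matrix and solving with singular value decomposition, can be sketched as follows. The matrices and values are invented for illustration and are not taken from the airframe model:

```python
import numpy as np

# Hypothetical sketch: recover parameter changes dp from residuals r
# between test data and model prediction, given a linear sensitivity
# matrix S, using an SVD-based least-squares solve.

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 3))          # sensitivity of 6 residuals to 3 parameters
p_true = np.array([0.5, -1.2, 2.0])  # "true" parameter changes (invented)
r = S @ p_true                        # residual vector consistent with p_true

# Solve S @ dp = r in the least-squares sense via the SVD pseudoinverse
U, s, Vt = np.linalg.svd(S, full_matrices=False)
dp = Vt.T @ ((U.T @ r) / s)

print(np.allclose(dp, p_true))  # → True
```

With noisy residuals the same solve returns the least-squares estimate, and small singular values can be truncated to regularize the update.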
Royston, Patrick; Sauerbrei, Willi
2016-01-01
In a recent article, Royston (2015, Stata Journal 15: 275-291) introduced the approximate cumulative distribution (acd) transformation of a continuous covariate x as a route toward modeling a sigmoid relationship between x and an outcome variable. In this article, we extend the approach to multivariable modeling by modifying the standard Stata program mfp. The result is a new program, mfpa, that has all the features of mfp plus the ability to fit a new model for user-selected covariates that we call fp1(p1, p2). The fp1(p1, p2) model comprises the best-fitting combination of a dimension-one fractional polynomial (fp1) function of x and an fp1 function of acd(x). We describe a new model-selection algorithm, the function-selection procedure with acd transformation, which uses significance testing to attempt to simplify an fp1(p1, p2) model to a submodel, an fp1 or linear model in x or in acd(x). The function-selection procedure with acd transformation is related in concept to the fsp (fp function-selection procedure), which is an integral part of mfp and which is used to simplify a dimension-two (fp2) function. We describe the mfpa command and give univariable and multivariable examples with real data to demonstrate its use.
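The dimension-one fractional polynomial that the procedure above simplifies to can be illustrated outside Stata. The sketch below fits y = b0 + b1*x^p, choosing p from the standard FP power set (with p = 0 read as log x), against invented data; it is not the mfpa implementation:

```python
import numpy as np

# Hypothetical sketch: select the best fp1 power by least-squares RSS.
# Data are synthetic; the true relationship is logarithmic (p = 0).

rng = np.random.default_rng(4)
x = rng.uniform(0.5, 5.0, 100)
y = 2.0 + 1.5 * np.log(x) + rng.normal(scale=0.1, size=x.size)

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # standard FP power set

def fp1_design(x, p):
    # p = 0 is conventionally taken as log x
    return np.log(x) if p == 0 else x ** p

def rss(x, y, p):
    X = np.column_stack([np.ones_like(x), fp1_design(x, p)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((X @ beta - y) ** 2))

best_p = min(powers, key=lambda p: rss(x, y, p))
print(best_p)  # → 0
```

The significance-testing step of the function-selection procedure would then compare this best fp1 fit against a linear submodel before accepting the transformation.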
NASA Astrophysics Data System (ADS)
Wang, Wenlei; Wu, Xiaojie; Wang, Chao; Jia, Zhaojun; He, Linwen; Wei, Yifan; Niu, Jianfeng; Wang, Guangce
2014-07-01
To identify genes with stable expression under stress (strong light, dehydration, and temperature shock), absolute real-time PCR was applied to determine the transcript numbers of selected test genes in Porphyra yezoensis, which has been regarded as a potential model species for studying stress responses in the intertidal zone. Photosynthesis parameters showed that Y(II) and Fv/Fm were significantly affected when stress was imposed on P. yezoensis thalli but recovered almost completely under normal conditions; these thalli were collected for the following experiments. Three samples, treated with different grades of stress combining salinity, irradiation, and temperature, were then collected. The transcript numbers of seven constitutively expressed genes in these samples were determined after RNA extraction and cDNA synthesis, yielding a general insight into the selection of internal control genes during the stress response. We found no obvious effects of salinity stress (at salinity 90) on the transcription of most genes used in the study. The 18S ribosomal RNA gene had the highest expression level but varied remarkably among the tested groups. RPS8 expression showed high, irregular variance between samples. GAPDH presented comparatively stable expression and could thus be selected as the internal control. EF-1α showed stable expression during the series of multiple-stress tests. Our research provides a useful reference for the selection of internal control genes for transcript determination in P. yezoensis.
Selective adsorption of flavor-active components on hydrophobic resins.
Saffarionpour, Shima; Sevillano, David Mendez; Van der Wielen, Luuk A M; Noordman, T Reinoud; Brouwer, Eric; Ottens, Marcel
2016-12-09
This work aims to propose an optimum resin for use in an industrial adsorption process for tuning flavor-active components or removing ethanol to produce an alcohol-free beer. A procedure is reported for selective adsorption of volatile aroma components from water/ethanol mixtures on synthetic hydrophobic resins. High-throughput 96-well microtiter-plate batch uptake experimentation is applied to screen resins for adsorption of esters (isoamyl acetate and ethyl acetate), higher alcohols (isoamyl alcohol and isobutyl alcohol), a diketone (diacetyl), and ethanol. The miniaturized batch uptake method is adapted for adsorption of volatile components and validated with column breakthrough analysis. The results of single-component adsorption tests on Sepabeads SP20-SS are expressed in single-component Langmuir, Freundlich, and Sips isotherm models, and multi-component versions of the Langmuir and Sips models are applied to the multi-component adsorption results obtained on several tested resins. The adsorption parameters are regressed, and the selectivity over ethanol is calculated for each tested component and resin. Resin scores for four scenarios of selective adsorption of esters, higher alcohols, diacetyl, and ethanol are obtained. The optimal resin for adsorption of esters is Sepabeads SP20-SS, with a resin score of 87%; for selective removal of higher alcohols, XAD16N and XAD4 from the Amberlite resin series are proposed, with scores of 80% and 74%, respectively. For adsorption of diacetyl, the XAD16N and XAD4 resins, with a score of 86%, are the optimum choice, and the Sepabeads SP2MGS and XAD761 resins showed the highest affinity towards ethanol. Copyright © 2016 Elsevier B.V. All rights reserved.
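As a sketch of the single-component isotherm fitting mentioned above, the snippet below recovers Langmuir parameters from synthetic (invented) batch-uptake data via the linearized double-reciprocal form; it is not the authors' regression code:

```python
import numpy as np

# Hypothetical sketch: estimate Langmuir parameters for
#   q = q_max * K * c / (1 + K * c)
# from the linearized form 1/q = 1/q_max + (1/(q_max*K)) * (1/c).
# All numbers are invented for illustration.

q_max_true, K_true = 2.5, 1.8
c = np.linspace(0.1, 5.0, 20)                     # liquid-phase concentration
q = q_max_true * K_true * c / (1 + K_true * c)    # synthetic resin loading

slope, intercept = np.polyfit(1.0 / c, 1.0 / q, 1)
q_max = 1.0 / intercept
K = 1.0 / (q_max * slope)
print(round(q_max, 2), round(K, 2))  # → 2.5 1.8
```

In practice a nonlinear least-squares fit of the raw isotherm is preferred over the linearization, since the double-reciprocal transform inflates the weight of low-concentration points.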
ERIC Educational Resources Information Center
Denton, Holly M.
A study tested a model of organizational variables that earlier research had identified as important in influencing what model(s) of public relations an organization selects. Models of public relations (as outlined by J. Grunig and Hunt in 1984) are defined as either press agentry, public information, two-way asymmetrical, or two-way symmetrical.…
Arc Jet Testing of Carbon Phenolic for Mars Sample Return and Future NASA Missions
NASA Technical Reports Server (NTRS)
Laub, Bernard; Chen, Yih-Kanq; Skokova, Kristina; Delano, Chad
2004-01-01
The objective of the Mars Sample Return (MSR) mission is to return a sample of Martian soil to Earth. The Earth Entry Vehicle (EEV) brings the samples through the atmosphere to the ground. The program aims to: model the aerothermal environment during EEV flight; on the basis of the results, select potential TPS materials for the EEV forebody; fabricate TPS materials; test the materials in an arc jet environment representative of the predicted flight environment; evaluate material performance; and compare modeling predictions with test results.
Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy
2017-10-01
Safety analysts usually use post-modeling methods, such as Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competing distributions or models. Such metrics require all competing distributions to be fitted to the data before any comparisons can be made. Given the continuous growth in newly introduced statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all the theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuition about why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper addresses these issues by proposing a methodology to design heuristics for model selection based on the characteristics of the data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools, (1) Monte-Carlo simulations and (2) machine learning classifiers, to design easy heuristics that predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of the data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only are they easy to use and free of any post-modeling inputs, but, using these heuristics, the analyst can also gain useful information about why the NB-L is preferred over the NB, or vice versa, when modeling data. Copyright © 2017 Elsevier Ltd. All rights reserved.
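The Monte-Carlo-plus-classifier idea can be illustrated with a toy stand-in: simulate datasets from two candidate count distributions (Poisson vs. negative binomial here, rather than NB vs. NB-L), summarize each dataset by a descriptive statistic, and learn a simple threshold heuristic. Everything below is invented for illustration:

```python
import numpy as np

# Hypothetical sketch: design a pre-fitting heuristic that predicts the
# "most-likely-true" distribution from a summary statistic of the data.

rng = np.random.default_rng(5)

def simulate(label, n=200):
    if label == 0:                      # Poisson: variance == mean
        return rng.poisson(lam=5.0, size=n)
    # negative binomial with mean 5 and variance > mean (overdispersed)
    return rng.negative_binomial(n=2, p=2 / 7, size=n)

# Monte Carlo: build a labelled training set of (dispersion index, label)
stats, labels = [], []
for label in (0, 1):
    for _ in range(200):
        x = simulate(label)
        stats.append(x.var() / x.mean())   # dispersion index of the dataset
        labels.append(label)
stats, labels = np.array(stats), np.array(labels)

# "Classifier": a threshold midway between the two class means
threshold = (stats[labels == 0].mean() + stats[labels == 1].mean()) / 2
accuracy = np.mean((stats > threshold).astype(int) == labels)
print(accuracy > 0.9)  # → True
```

A real application would use several summary statistics and a trained classifier (e.g. a decision tree), but the workflow, simulate, summarize, classify before fitting, is the same.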
Dielectric breakdown of additively manufactured polymeric materials
Monzel, W. Jacob; Hoff, Brad W.; Maestas, Sabrina S.; ...
2016-01-11
Dielectric strength testing of selected Polyjet-printed polymer plastics was performed in accordance with ASTM D149. This dielectric strength data is compared to manufacturer-provided dielectric strength data for selected plastics printed using the stereolithography (SLA), fused deposition modeling (FDM), and selective laser sintering (SLS) methods. Tested Polyjet samples demonstrated dielectric strengths as high as 47.5 kV/mm for a 0.5 mm thick sample and 32.1 kV/mm for a 1.0 mm sample. The dielectric strength of the additively manufactured plastics evaluated in this study was lower than that of the majority of non-printed plastics by at least 15% (with the exception of polycarbonate).
Dielectric breakdown of additively manufactured polymeric materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monzel, W. Jacob; Hoff, Brad W.; Maestas, Sabrina S.
Dielectric strength testing of selected Polyjet-printed polymer plastics was performed in accordance with ASTM D149. This dielectric strength data is compared to manufacturer-provided dielectric strength data for selected plastics printed using the stereolithography (SLA), fused deposition modeling (FDM), and selective laser sintering (SLS) methods. Tested Polyjet samples demonstrated dielectric strengths as high as 47.5 kV/mm for a 0.5 mm thick sample and 32.1 kV/mm for a 1.0 mm sample. The dielectric strength of the additively manufactured plastics evaluated in this study was lower than that of the majority of non-printed plastics by at least 15% (with the exception of polycarbonate).
Prediction of Baseflow Index of Catchments using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Yadav, B.; Hatfield, K.
2017-12-01
We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using surrogate catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elastic net, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models, soil, land use, and climate data, and other publicly available ancillary and geospatial data. Eighty percent of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation using exhaustive grid search was used to fit the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation.
The most promising algorithms exceeding an accuracy score (r-square) of 0.7 on test data include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
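The k-fold grid search used to fit hyperparameters can be sketched with ridge regression, one of the eight tested algorithms. The data, number of attributes, and alpha grid below are invented stand-ins, not the study's catchment dataset:

```python
import numpy as np

# Hypothetical sketch: choose a ridge penalty by k-fold cross-validated
# grid search on synthetic "catchment attribute" data.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                 # six surrogate attributes
w = np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0])
y = X @ w + rng.normal(scale=0.1, size=200)   # synthetic BFI-like target

def ridge_fit(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def cv_mse(X, y, alpha, k=5):
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        beta = ridge_fit(X[train], y[train], alpha)
        errs.append(np.mean((X[test] @ beta - y[test]) ** 2))
    return float(np.mean(errs))

best_alpha = min([0.01, 0.1, 1.0, 10.0], key=lambda a: cv_mse(X, y, a))
print(best_alpha)
```

The tree-based ensembles the study found most accurate follow the same protocol, only with a larger hyperparameter grid (tree depth, number of estimators) in place of the single penalty term.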
Thrust imbalance of the Space Shuttle solid rocket motors
NASA Technical Reports Server (NTRS)
Foster, W. A., Jr.; Sforzini, R. H.; Shackelford, B. W., Jr.
1981-01-01
The Monte Carlo statistical analysis of thrust imbalance is applied to both the Titan IIIC and the Space Shuttle solid rocket motors (SRMs) firing in parallel, and results are compared with those obtained from the Space Shuttle program. The test results are examined in three phases: (1) pairs of SRMs selected from static tests of the four developmental motors (DMs 1 through 4); (2) pairs of SRMs selected from static tests of the three quality assurance motors (QMs 1 through 3); (3) SRMs on the first flight test vehicle (STS-1A and STS-1B). The simplified internal ballistic model utilized for computing thrust from head-end pressure measurements on flight tests is shown to agree closely with measured thrust data. Inaccuracies in thrust imbalance evaluation are explained by possible flight test instrumentation errors.
1981-01-01
explanatory variable has been omitted. Ramsey (1974) has developed a rather interesting test for detecting specification errors using estimates of the... Peter (1979), A Guide to Econometrics, Cambridge, MA: The MIT Press. Ramsey, J.B. (1974), "Classical Model Selection Through Specification Error Tests," in P. Zarembka, Ed., Frontiers in Econometrics, New York: Academic Press. Theil, Henri (1971), Principles of Econometrics, New York: John Wiley
Experimental Aerodynamic Facilities of the Aerodynamics Research and Concepts Assistance Section
1983-02-01
experimental data desired. Internal strain gage balances covering a range of sizes and load capabilities are available for static force and moment tests...tunnel. Both sting and side wall model mounts are available which can be adapted to a variety of internal strain gage balance systems for force and...model components or liquids in the test section. A selection of internal and external strain gage balances and associated mounting fixtures are
Transformation Abilities: A Reanalysis and Confirmation of SOI Theory.
ERIC Educational Resources Information Center
Khattab, Ali-Maher; And Others
1987-01-01
Confirmatory factor analysis was used to reanalyze correlational data from selected variables in Guilford's Aptitudes Research Project. Results indicated Guilford's model reproduced the original correlation matrix more closely than other models. Most of Guilford's tests indicated high loadings on their hypothesized factors. (GDC)
A comparison of WEC control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, David G.; Bacelli, Giorgio; Coe, Ryan Geoffrey
2016-04-01
The operation of Wave Energy Converter (WEC) devices can pose many challenging problems to the Water Power Community. A key research question is how to significantly improve the performance of these WEC devices through improving the control system design. This report summarizes an effort to analyze and improve the performance of WECs through the design and implementation of control systems. Controllers were selected to span the WEC control design space with the aim of building a more comprehensive understanding of different controller capabilities and requirements. To design and evaluate these control strategies, a model-scale test-bed WEC was designed for both numerical and experimental testing (see Section 1.1). Seven control strategies have been developed and applied on a numerical model of the selected WEC. This model is capable of performing at a range of levels, spanning from a fully linear realization to varying levels of nonlinearity. The details of this model and its ongoing development are described in Section 1.2.
Life Modeling and Design Analysis for Ceramic Matrix Composite Materials
NASA Technical Reports Server (NTRS)
2005-01-01
The primary research efforts focused on characterizing and modeling static failure, environmental durability, and creep-rupture behavior of two classes of ceramic matrix composites (CMCs): silicon carbide fibers in a silicon carbide matrix (SiC/SiC) and carbon fibers in a silicon carbide matrix (C/SiC). An engineering life prediction model (Probabilistic Residual Strength model) has been developed specifically for CMCs. The model uses residual strength as the damage metric for evaluating remaining life and is posed probabilistically in order to account for the stochastic nature of the material's response. In support of the modeling effort, extensive testing of C/SiC in partial pressures of oxygen has been performed, including creep testing, tensile testing, and half-life and residual tensile strength testing. C/SiC is proposed for airframe and propulsion applications in advanced reusable launch vehicles. Figures 1 and 2 illustrate the model's predictive capabilities as well as the manner in which experimental tests are being selected so as to ensure sufficient data are available to aid in model validation.
Bendall, Sarah; Hulbert, Carol Anne; Alvarez-Jimenez, Mario; Allott, Kelly; McGorry, Patrick D; Jackson, Henry James
2013-11-01
Several theories suggest that posttraumatic intrusive symptoms are central to the relationship between childhood trauma (CT) and hallucinations and delusions in psychosis. Biased selective attention has been implicated as a cognitive process underlying posttraumatic intrusions. The current study sought to test theories of the relationship between childhood sexual abuse (CSA), hallucinations and delusions, posttraumatic intrusions, and selective attention in first-episode psychosis (FEP). Twenty-eight people with FEP and 21 nonclinical controls were assessed for CT and psychotic and posttraumatic stress symptoms and completed an emotional Stroop test using CSA-related and other words. Those with FEP and CSA had more severe hallucinations and delusions than those with FEP and without CSA. They also reported posttraumatic intrusions at clinical levels and showed selective attention to CSA-related words. The results are consistent with the posttraumatic intrusions account of hallucinations and delusions in those with CSA and psychosis.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties; the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model.
We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities, and correlations obtained using DRAM, DREAM, and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model; the energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models.
To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
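A minimal random-walk Metropolis sampler, of the kind that DRAM and DREAM elaborate with delayed rejection and adaptive proposals, can be sketched for a toy one-parameter calibration. The model, data, and tuning constants below are invented and far simpler than the HIV or heat models above:

```python
import numpy as np

# Hypothetical sketch: calibrate theta in y = theta * x from noisy
# observations with a random-walk Metropolis sampler (flat prior,
# Gaussian likelihood).

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
theta_true, sigma = 3.0, 0.1
y_obs = theta_true * x + rng.normal(scale=sigma, size=x.size)

def log_post(theta):
    # log-likelihood up to a constant; flat prior adds nothing
    return -0.5 * np.sum((y_obs - theta * x) ** 2) / sigma**2

chain = [1.0]                         # deliberately poor starting value
for _ in range(5000):
    prop = chain[-1] + rng.normal(scale=0.1)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1]):
        chain.append(prop)            # accept
    else:
        chain.append(chain[-1])       # reject: repeat current state

posterior = np.array(chain[1000:])    # discard burn-in
print(abs(posterior.mean() - theta_true) < 0.2)  # → True
```

Verification in the spirit of the dissertation would compare this sampled density against a direct numerical evaluation of Bayes' formula on a theta grid, which is feasible here because the problem is one-dimensional.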
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
ERIC Educational Resources Information Center
Cantor, Jeffrey A.; Hobson, Edward N.
The development of a test design methodology used to construct a criterion-referenced System Achievement Test (CR-SAT) for selected Naval enlisted classification (NEC) in the Strategic Weapon System (SWS) of the United States Navy is described. Subject matter experts, training data analysts and educational specialists developed a comprehensive…
Basalt models for the Mars penetrator mission: Geology of the Amboy Lava Field, California
NASA Technical Reports Server (NTRS)
Greeley, R.; Bunch, T. E.
1976-01-01
Amboy lava field (San Bernardino County, California) is a Holocene basalt flow selected as a test site for potential Mars Penetrators. A discussion is presented of (1) the general relations of basalt flow features and textures to styles of eruptions on earth, (2) the types of basalt flows likely to be encountered on Mars and the rationale for selection of the Amboy lava field as a test site, (3) the general geology of the Amboy lava field, and (4) detailed descriptions of the target sites at Amboy lava field.
Previous studies indicate that freshwater mollusks are more sensitive than commonly tested organisms to some chemicals, such as copper and ammonia. Nevertheless, mollusks are generally under-represented in toxicity databases. Studies are needed to generate data with which to comp...
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal models such as the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximization Test (GPMT). In recent years, EU regulations have provided a strong incentiv...
Javed, Faizan; Chan, Gregory S H; Savkin, Andrey V; Middleton, Paul M; Malouf, Philip; Steel, Elizabeth; Mackie, James; Lovell, Nigel H
2009-01-01
This paper uses non-linear support vector regression (SVR) to model the blood volume and heart rate (HR) responses in 9 hemodynamically stable kidney failure patients during hemodialysis. Using radial basis function (RBF) kernels, non-parametric models of relative blood volume (RBV) change with time, as well as percentage change in HR with respect to RBV, were obtained. The ε-insensitive loss function was used for SVR modeling. The design parameters, which include the capacity (C), the insensitivity region (ε) and the RBF kernel parameter (σ), were selected with a grid search, and the selected models were cross-validated using the average mean square error (AMSE) calculated from testing data with a k-fold cross-validation technique. Linear regression was also applied to fit the curves, and the AMSE was calculated for comparison with SVR. For the model of RBV against time, SVR gave a lower AMSE for both training (AMSE=1.5) and testing data (AMSE=1.4) than linear regression (AMSE=1.8 and 1.5). SVR also provided a better fit for HR against RBV for both training and testing data (AMSE=15.8 and 16.4) compared to linear regression (AMSE=25.2 and 20.1).
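The model-selection procedure described above, scoring candidate fits by an average mean square error (AMSE) over k held-out folds, can be sketched in outline. The sketch below is a minimal stdlib-only illustration using a least-squares line standing in for the SVR fit; the data and fold count are invented for illustration only.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def amse(model_fit, xs, ys, k=5):
    """Average mean square error over k held-out folds."""
    folds = k_fold_indices(len(xs), k)
    errors = []
    for fold in folds:
        train = [i for i in range(len(xs)) if i not in fold]
        fitted = model_fit([xs[i] for i in train], [ys[i] for i in train])
        mse = sum((fitted(xs[i]) - ys[i]) ** 2 for i in fold) / len(fold)
        errors.append(mse)
    return sum(errors) / len(errors)

def linear_fit(xs, ys):
    """Ordinary least-squares line, used here as the baseline model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return lambda x: my + slope * (x - mx)

# toy data: relative blood volume drifting down over dialysis time
xs = [float(t) for t in range(20)]
ys = [100 - 0.8 * t + (0.3 if t % 2 else -0.3) for t in xs]
print(round(amse(linear_fit, xs, ys), 3))
```

In the paper's setting, the same AMSE would be computed for each (C, ε, σ) grid point and the combination with the lowest value retained.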
Reponen, Tiina; Lee, Shu-An; Grinshpun, Sergey A; Johnson, Erik; McKay, Roy
2011-04-01
This study investigated particle-size-selective protection factors (PFs) of four models of N95 filtering facepiece respirators (FFRs) that passed and failed fit testing. Particle size ranges were representative of individual viruses and bacteria (aerodynamic diameter d(a) = 0.04-1.3 μm). Standard respirator fit testing was followed by particle-size-selective measurement of PFs while subjects wore N95 FFRs in a test chamber. PF values obtained for all subjects were then compared to those obtained for the subjects who passed the fit testing. Overall fit test passing rate for all four models of FFRs was 67%. Of these, 29% had PFs <10 (the Occupational Safety and Health Administration Assigned Protection Factor designated for this type of respirator). When only subjects that passed fit testing were included, PFs improved with 9% having values <10. On average, the PFs were 1.4 times (29.5/21.5) higher when only data for those who passed fit testing were included. The minimum PFs were consistently observed in the particle size range of 0.08-0.2 μm. Overall PFs increased when subjects passed fit testing. The results support the value of fit testing but also show for the first time that PFs are dependent on particle size regardless of fit testing status.
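A particle-size-selective protection factor is the ratio of the ambient concentration to the in-facepiece concentration, computed per size bin. The sketch below illustrates the computation on invented concentrations; the bins loosely follow the 0.04-1.3 μm range in the abstract, and all values are hypothetical.

```python
# Hypothetical concentrations (particles/cm^3) outside and inside the
# facepiece, binned by aerodynamic diameter (μm); values are illustrative.
size_bins = [0.04, 0.08, 0.2, 0.5, 1.3]
outside = {0.04: 900.0, 0.08: 1200.0, 0.2: 1000.0, 0.5: 700.0, 1.3: 400.0}
inside  = {0.04:  30.0, 0.08:   80.0, 0.2:   60.0, 0.5:  20.0, 1.3:   8.0}

# Protection factor per size bin: ambient / in-facepiece concentration.
pf = {d: outside[d] / inside[d] for d in size_bins}

min_bin = min(pf, key=pf.get)                      # size with the weakest protection
below_apf = [d for d in size_bins if pf[d] < 10]   # bins under the OSHA APF of 10

print(pf)
print("minimum PF at", min_bin, "μm; bins below APF 10:", below_apf)
```

With these invented numbers the minimum falls at 0.08 μm, echoing the study's finding that the weakest protection sits in the 0.08-0.2 μm range.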
DECIDE: a software for computer-assisted evaluation of diagnostic test performance.
Chiecchio, A; Bo, A; Manzone, P; Giglioli, F
1993-05-01
The evaluation of the performance of clinical tests is a complex problem involving different steps and many statistical tools, not always structured in an organic and rational system. This paper presents software that provides an organic system of statistical tools to aid the evaluation of clinical test performance. The program allows (a) the building and organization of a working database, (b) the selection of the minimal set of tests with the maximum information content, (c) the search for the model best fitting the distribution of the test values, (d) the selection of the optimal diagnostic cut-off value of the test for every positive/negative situation, and (e) the evaluation of the performance of combinations of correlated and uncorrelated tests. The uncertainty associated with all the variables involved is evaluated. The program works in an MS-DOS environment with an EGA or better graphics card.
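Step (d), selecting an optimal diagnostic cut-off, is often done by scanning candidate thresholds and maximizing an index such as Youden's J = sensitivity + specificity - 1. The sketch below is a minimal illustration of that idea on invented test values; DECIDE itself may weight the positive/negative situations differently.

```python
def youden_cutoff(positives, negatives):
    """Scan candidate cut-offs and return the one maximizing
    Youden's J = sensitivity + specificity - 1 (illustrative criterion)."""
    best = (None, -1.0)
    for c in sorted(set(positives + negatives)):
        sens = sum(1 for v in positives if v >= c) / len(positives)
        spec = sum(1 for v in negatives if v < c) / len(negatives)
        j = sens + spec - 1
        if j > best[1]:
            best = (c, j)
    return best

# toy test values for diseased (positive) and healthy (negative) subjects
pos = [5.1, 6.3, 7.0, 7.4, 8.2, 9.0]
neg = [2.0, 2.8, 3.5, 4.1, 4.9, 5.5]
cutoff, j = youden_cutoff(pos, neg)
print(cutoff, round(j, 2))
```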
NASA Astrophysics Data System (ADS)
Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin
2013-07-01
The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, the reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models based on both the developed discrepancy ratio and threshold statistics revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best-fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS model had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the predictions bracketed by the 95% confidence bound and the d-factor in the testing step for the selected ROANFIS model were 94% and 0.83, respectively.
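The two uncertainty statistics reported, the percentage of observations bracketed by the 95% confidence bound and the d-factor, can be computed directly from the prediction bands. The sketch below uses invented observations and bounds, and takes the d-factor as the mean band width divided by the standard deviation of the observations, a common convention in Monte-Carlo uncertainty analysis; the paper's exact formulation may differ.

```python
def bracketed_and_dfactor(obs, lower, upper):
    """Fraction of observations inside the confidence band, and
    d-factor = mean band width / standard deviation of observations."""
    n = len(obs)
    inside = sum(1 for o, lo, hi in zip(obs, lower, upper) if lo <= o <= hi) / n
    mean = sum(obs) / n
    std = (sum((o - mean) ** 2 for o in obs) / n) ** 0.5
    width = sum(hi - lo for lo, hi in zip(lower, upper)) / n
    return inside, width / std

obs   = [10, 12, 15, 14, 13, 11, 16, 18, 17, 12]   # invented BOD5 observations
lower = [o - 2 for o in obs]; lower[3] = 15         # one deliberate miss
upper = [o + 2 for o in obs]
inside, d = bracketed_and_dfactor(obs, lower, upper)
print(inside, round(d, 2))
```

A higher bracketing fraction with a smaller d-factor (narrower band) indicates less model uncertainty, which is the basis for preferring ROANFIS over ANFIS here.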
Brown, Andrew D; Marotta, Thomas R
2017-02-01
Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm-random forest, support vector machine, or k-nearest neighbor-to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
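The reported accuracy, sensitivity, specificity, and predictive values all derive from a 2x2 confusion matrix on the test dataset. A minimal sketch, with invented counts for one binary task (whether contrast is needed):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# illustrative counts for a binary "needs contrast?" prediction task
m = classification_metrics(tp=420, fp=60, tn=410, fn=110)
print({k: round(v, 3) for k, v in m.items()})
```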
Botvinick, Matthew M.; Buxbaum, Laurel J.; Bylsma, Lauren M.; Jax, Steven A.
2014-01-01
The act of reaching for and acting upon an object involves two forms of selection: selection of the object as a target, and selection of the action to be performed. While these two forms of selection are logically dissociable, and are evidently subserved by separable neural pathways, they must also be closely coordinated. We examine the nature of this coordination by developing and analyzing a computational model of object and action selection first proposed by Ward [Ward, R. (1999). Interactions between perception and action systems: a model for selective action. In G. W. Humphreys, J. Duncan, & A. Treisman (Eds.), Attention, Space and Action: Studies in Cognitive Neuroscience. Oxford: Oxford University Press]. An interesting tenet of this account, which we explore in detail, is that the interplay between object and action selection depends critically on top-down inputs representing the current task set or plan of action. A concrete manifestation of this, established through a series of simulations, is that the impact of distractor objects on reaching times can vary depending on the nature of the current action plan. In order to test the model's predictions in this regard, we conducted two experiments, one involving direct object manipulation, the other involving tool-use. In both experiments we observed the specific interaction between task set and distractor type predicted by the model. Our findings provide support for the computational model, and more broadly for an interactive account of object and action selection. PMID:19100758
ERIC Educational Resources Information Center
Nehm, Ross H.; Haertig, Hendrik
2012-01-01
Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…
ERIC Educational Resources Information Center
Roberts, William L.; Pugliano, Gina; Langenau, Erik; Boulet, John R.
2012-01-01
Medical schools employ a variety of preadmission measures to select students most likely to succeed in the program. The Medical College Admission Test (MCAT) and the undergraduate college grade point average (uGPA) are two academic measures typically used to select students in medical school. The assumption that presently used preadmission…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Prabakar, Kumaraguru; Nagarajan, Adarsh
As more grid-connected photovoltaic (PV) inverters become compliant with evolving interconnection requirements, there is increased interest from utilities in understanding how to best deploy advanced grid-support functions (GSF) in the field. One efficient and cost-effective method to examine such deployment options is to leverage power hardware-in-the-loop (PHIL) testing methods. Two Hawaiian Electric feeder models were converted to real-time models in the OPAL-RT real-time digital testing platform and integrated with models of GSF-capable PV inverters built from characterization test data. The integrated model was subsequently used in PHIL testing to evaluate the effects of different fixed power factor and volt-watt control settings on voltage regulation of the selected feeders. The results of this study were provided as inputs for field deployment and technical interconnection requirements for grid-connected PV inverters on the Hawaiian Islands.
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.
Evidence-based selection process to the Master of Public Health program at Medical University.
Panczyk, Mariusz; Juszczyk, Grzegorz; Zarzeka, Aleksander; Samoliński, Łukasz; Belowska, Jarosława; Cieślak, Ilona; Gotlib, Joanna
2017-09-11
Evaluation of the predictive validity of selected sociodemographic factors and admission criteria for Master's studies in Public Health at the Faculty of Health Sciences, Medical University of Warsaw (MUW). For the evaluation, recruitment data and learning results of students enrolled between 2008 and 2012 were used (N = 605, average age 22.9 ± 3.01). The predictive analysis was performed using multiple linear regression. Twelve predictors were selected for the regression model, including: sex, age, professional degree (BA), the Bachelor's studies grade point average (GPA), and the total score of the preliminary examination broken down into five thematic areas. Depending on the tested model, one of two dependent variables was used: first-year GPA or cumulative GPA in the Master's program. The regression model based on the Master's program GPA was better matched to the data than the model based on the first-year GPA (adjusted R² = 0.476 versus 0.413, respectively). The Bachelor's studies GPA and each of the five subtests comprising the entrance examination were significant predictors of the success achieved by a student both after the first year and at the end of the course of studies. Admission criteria combining the total MCQ examination score with the Bachelor's studies GPA can be successfully used to select candidates for Master's degree studies in Public Health. The high predictive validity of the recruitment system confirms the soundness of the admission policy adopted at MUW.
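Adjusted R², the fit statistic used to compare the two regression models, penalizes plain R² by the number of predictors so that adding variables does not automatically inflate the fit. A minimal sketch with invented GPA predictions (k = 3 predictors assumed for the toy example, not the study's 12):

```python
def adjusted_r2(y, y_hat, k):
    """Adjusted R-squared for a regression with k predictors."""
    n = len(y)
    mean = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# toy example: predicted vs. observed GPA for 10 students, 3 predictors assumed
y     = [3.0, 3.2, 2.8, 3.6, 3.9, 2.5, 3.1, 3.4, 3.7, 2.9]
y_hat = [3.1, 3.1, 2.9, 3.5, 3.8, 2.6, 3.0, 3.5, 3.6, 3.0]
print(round(adjusted_r2(y, y_hat, k=3), 3))
```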
Selection based on the size of the black tie of the great tit may be reversed in urban habitats.
Senar, Juan Carlos; Conroy, Michael J; Quesada, Javier; Mateos-Gonzalez, Fernando
2014-07-01
A standard approach to model how selection shapes phenotypic traits is the analysis of capture-recapture data relating trait variation to survival. Divergent selection, however, has never been analyzed by the capture-recapture approach. Most reported examples of differences between urban and nonurban animals reflect behavioral plasticity rather than divergent selection. The aim of this paper was to use a capture-recapture approach to test the hypothesis that divergent selection can also drive local adaptation in urban habitats. We focused on the size of the black breast stripe (i.e., tie width) of the great tit (Parus major), a sexual ornament used in mate choice. Urban great tits display smaller tie sizes than forest birds. Because tie size is mostly genetically determined, it could potentially respond to selection. We analyzed capture/recapture data of male great tits in Barcelona city (N = 171) and in a nearby (7 km) forest (N = 324) from 1992 to 2008 using MARK. When modelling recapture rate, we found it to be strongly influenced by tie width, so that both for urban and forest habitats, birds with smaller ties were more trap-shy and more cautious than their larger tied counterparts. When modelling survival, we found that survival prospects in forest great tits increased the larger their tie width (i.e., directional positive selection), but the reverse was found for urban birds, with individuals displaying smaller ties showing higher survival (i.e., directional negative selection). As melanin-based tie size seems to be related to personality, and both are heritable, results may be explained by cautious personalities being favored in urban environments. More importantly, our results show that divergent selection can be an important mechanism in local adaptation to urban habitats and that capture-recapture is a powerful tool to test it.
Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe
2017-08-30
The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed for us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders. Copyright © 2017 Elsevier B.V. All rights reserved.
Kebede, Biniam T; Grauwet, Tara; Magpusao, Johannes; Palmers, Stijn; Michiels, Chris; Hendrickx, Marc; Loey, Ann Van
2015-07-15
To have a better understanding of chemical reactions during shelf-life, an integrated analytical and engineering toolbox: "fingerprinting-kinetics" was used. As a case study, a thermally sterilised carrot puree was selected. Sterilised purees were stored at four storage temperatures as a function of time. Fingerprinting enabled selection of volatiles clearly changing during shelf-life. Only these volatiles were identified and studied further. Next, kinetic modelling was performed to investigate the suitability of these volatiles as quality indices (markers) for accelerated shelf-life testing (ASLT). Fingerprinting enabled selection of terpenoids, phenylpropanoids, fatty acid derivatives, Strecker aldehydes and sulphur compounds as volatiles clearly changing during shelf-life. The amount of Strecker aldehydes increased during storage, whereas the rest of the volatiles decreased. Out of the volatiles, based on the applied kinetic modelling, myristicin, α-terpinolene, β-pinene, α-terpineol and octanal were identified as potential markers for ASLT. Copyright © 2015 Elsevier Ltd. All rights reserved.
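Kinetic modelling for accelerated shelf-life testing typically fits a first-order decay to a marker's concentration over storage time, then relates rate constants at different temperatures through the Arrhenius equation. A minimal sketch with simulated marker data; the activation energy is an illustrative value, not one estimated in the study.

```python
import math

def first_order_k(times, concs):
    """Least-squares slope of ln(C) vs t, i.e. the first-order rate constant."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    den = sum((t - mt) ** 2 for t in times)
    return -num / den

# hypothetical marker decay (e.g. a terpenoid) at one storage temperature
times = [0, 10, 20, 30, 40]                          # days
concs = [math.exp(-0.05 * t) * 100 for t in times]   # exact first-order decay
k = first_order_k(times, concs)
print(round(k, 4))   # recovers the simulated rate constant

# Arrhenius link between rates at a reference and an accelerated temperature,
# assuming an activation energy Ea (illustrative value)
Ea, R = 80e3, 8.314                  # J/mol, J/(mol K)
k_ref, T_ref, T_acc = k, 298.15, 318.15
k_acc = k_ref * math.exp(-Ea / R * (1 / T_acc - 1 / T_ref))
print(k_acc > k_ref)
```

A marker is a good ASLT candidate when its decay follows such a simple kinetic law across all storage temperatures, which is what qualifies myristicin and the other volatiles above.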
Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.
Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing
2009-03-06
Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection frame, this paper presents a computational system to predict the PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poor-performed and/or redundant features, resulting in 103 remaining features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall accurate prediction rate of 76.18%, evaluated by 10-fold cross-validation test, which is 1.46% higher than using the initial 114 features and is 6.51% higher than the 20 features, coded by amino acid compositions. The PPIs predictor, developed for this research, is available for public use at http://chemdata.shu.edu.cn/ppi.
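A wrapper-style feature selection scores candidate feature subsets by the cross-validated accuracy of the learner itself, here a k-nearest-neighbours classifier with leave-one-out evaluation. The sketch below uses an exhaustive search over a tiny invented dataset; the paper's mRMR-KNNs-wrapper is instead a heuristic that scales to its 114 features.

```python
from itertools import combinations

def knn_accuracy(X, y, feats, k=3):
    """Leave-one-out accuracy of a k-NN classifier restricted to `feats`."""
    def dist(a, b):
        return sum((a[f] - b[f]) ** 2 for f in feats)
    correct = 0
    for i in range(len(X)):
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(X[i], X[j]))[:k]
        votes = [y[j] for j in nbrs]
        if max(set(votes), key=votes.count) == y[i]:
            correct += 1
    return correct / len(X)

def best_subset(X, y, n_feats, k=3):
    """Exhaustively score every non-empty feature subset (wrapper search)."""
    best_feats, best_acc = None, -1.0
    for r in range(1, n_feats + 1):
        for feats in combinations(range(n_feats), r):
            acc = knn_accuracy(X, y, feats, k)
            if acc > best_acc:
                best_feats, best_acc = feats, acc
    return best_feats, best_acc

# toy data: feature 0 is informative, feature 1 is noise
X = [(0.1, 5), (0.2, 1), (0.3, 4), (0.9, 2), (1.0, 5), (1.1, 1)]
y = [0, 0, 0, 1, 1, 1]
feats, acc = best_subset(X, y, 2)
print(feats, acc)
```

Dropping the noise feature improves the leave-one-out accuracy, mirroring how the paper's 103-feature subset outperforms the initial 114 features.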
Modeling the Restraint of Liquid Jets by Surface Tension in Microgravity
NASA Technical Reports Server (NTRS)
Chato, David J.; Jacqmim, David A.
2001-01-01
An axisymmetric phase field model is developed and used to model surface tension forces on liquid jets in microgravity. The previous work in this area is reviewed and a baseline drop tower experiment selected for model comparison. A mathematical model is developed which includes a free surface, a symmetric centerline, and wall boundaries with given contact angles. The model is solved numerically with a compact fourth-order stencil on an equally spaced axisymmetric grid. After grid convergence studies, a grid was selected and all drop tower tests were modeled. Agreement was assessed by comparing predicted and measured free surface rise. Trend-wise agreement is good, but agreement in magnitude is only fair. Sources of disagreement are suspected to be the lack of a turbulence model and the presence of slosh baffles in the experiment that were not included in the model.
A wave model test bed study for wave energy resource characterization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Zhaoqing; Neary, Vincent S.; Wang, Taiping
This paper presents a test bed study conducted to evaluate best practices in wave modeling to characterize energy resources. The model test bed off the central Oregon Coast was selected because of the high wave energy and available measured data at the site. Two third-generation spectral wave models, SWAN and WWIII, were evaluated. A four-level nested-grid approach, from global to test bed scale, was employed. Model skill was assessed using a set of performance metrics based on comparing six simulated wave resource parameters to observations from a wave buoy inside the test bed. Both WWIII and SWAN performed well at the test bed site and exhibited similar modeling skill. The ST4 package with WWIII, which represents better physics for wave growth and dissipation, outperformed ST2 physics and improved wave power density and significant wave height predictions. However, ST4 physics tended to overpredict the wave energy period. The newly developed ST6 physics did not improve the overall model skill for predicting the six wave resource parameters. Sensitivity analysis using different wave frequency and direction resolutions indicated the model results were not sensitive to spectral resolution at the test bed site, likely due to the absence of complex bathymetric and geometric features.
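Typical wave-model skill metrics, such as bias, RMSE, and scatter index, compare modeled and buoy-observed resource parameters. A minimal sketch with invented significant wave heights; the study's exact metric set may differ.

```python
def skill_metrics(model, obs):
    """Bias, RMSE and scatter index comparing modeled vs. buoy-observed values."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = (sum((m - o) ** 2 for m, o in zip(model, obs)) / n) ** 0.5
    si = rmse / (sum(obs) / n)   # scatter index: RMSE / mean observation
    return bias, rmse, si

# illustrative significant wave heights (m): model vs. buoy
hs_model = [2.1, 2.4, 3.0, 2.8, 3.5, 2.2]
hs_obs   = [2.0, 2.5, 2.9, 3.0, 3.3, 2.1]
bias, rmse, si = skill_metrics(hs_model, hs_obs)
print(round(bias, 3), round(rmse, 3), round(si, 3))
```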
Progressive Aerodynamic Model Identification From Dynamic Water Tunnel Test of the F-16XL Aircraft
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Klein, Vladislav; Szyba, Nathan M.
2004-01-01
Development of a general aerodynamic model that is adequate for predicting the forces and moments in the nonlinear and unsteady portions of the flight envelope has not been accomplished to a satisfactory degree. Predicting aerodynamic response during arbitrary motion of an aircraft over the complete flight envelope requires further development of the mathematical model and the associated methods for ground-based testing in order to allow identification of the model. In this study, a general nonlinear unsteady aerodynamic model is presented, followed by a summary of a linear modeling methodology that includes test and identification methods, and then a progressive series of steps suggesting a roadmap to develop a general nonlinear methodology that defines modeling, testing, and identification methods. Initial steps of the general methodology were applied to static and oscillatory test data to identify rolling-moment coefficient. Static measurements uncovered complicated dependencies of the aerodynamic coefficient on angle of attack and sideslip in the stall region making it difficult to find a simple analytical expression for the measurement data. In order to assess the effect of sideslip on the damping and unsteady terms, oscillatory tests in roll were conducted at different values of an initial offset in sideslip. Candidate runs for analyses were selected where higher order harmonics were required for the model and where in-phase and out-of-phase components varied with frequency. From these results it was found that only data in the angle-of-attack range of 35 degrees to 37.5 degrees met these requirements. From the limited results it was observed that the identified models fit the data well and both the damping-in-roll and the unsteady term gain are decreasing with increasing sideslip and motion amplitude. 
Limited similarity between parameter values in the nonlinear model and the linear model suggests that identifiability of parameters in both terms may be a problem. However, the proposed methodology can still be used with careful experiment design and carefully selected values of angle of attack, sideslip, amplitude, and frequency of the oscillatory data.
Korhonen, L E; Turpeinen, M; Rahnasto, M; Wittekindt, C; Poso, A; Pelkonen, O; Raunio, H; Juvonen, R O
2007-01-01
Background and purpose: The cytochrome P450 2B6 (CYP2B6) enzyme metabolises a number of clinically important drugs. Drug-drug interactions resulting from inhibition or induction of CYP2B6 activity may cause serious adverse effects. The aims of this study were to construct a three-dimensional structure-activity relationship (3D-QSAR) model of the CYP2B6 protein and to identify novel potent and selective inhibitors of CYP2B6 for in vitro research purposes. Experimental approach: The inhibition potencies (IC50 values) of structurally diverse chemicals were determined with recombinant human CYP2B6 enzyme. Two successive models were constructed using Comparative Molecular Field Analysis (CoMFA). Key results: Three compounds proved to be very potent and selective competitive inhibitors of CYP2B6 in vitro (IC50<1 μM): 4-(4-chlorobenzyl)pyridine (CBP), 4-(4-nitrobenzyl)pyridine (NBP), and 4-benzylpyridine (BP). A complete inhibition of CYP2B6 activity was achieved with 0.1 μM CBP, whereas other CYP-related activities were not affected. Forty-one compounds were selected for further testing and construction of the final CoMFA model. The created CoMFA model was of high quality and predicted accurately the inhibition potency of a test set (n=7) of structurally diverse compounds. Conclusions and implications: Two CoMFA models were created which revealed the key molecular characteristics of inhibitors of the CYP2B6 enzyme. The final model accurately predicted the inhibitory potencies of several structurally unrelated compounds. CBP, BP and NBP were identified as novel potent and selective inhibitors of CYP2B6 and CBP especially is a suitable inhibitor for in vitro screening studies. PMID:17325652
NASA Astrophysics Data System (ADS)
Fugett, James H.; Bennett, Haydon E.; Shrout, Joshua L.; Coad, James E.
2017-02-01
Expansions in minimally invasive medical devices and technologies with thermal mechanisms of action are continuing to advance the practice of medicine. These expansions have led to an increasing need for appropriate animal models to validate and quantify device performance. The planning of these studies should take into consideration a variety of parameters, including the appropriate animal model (test system - ex vivo or in vivo; species; tissue type), treatment conditions (test conditions), predicate device selection (as appropriate, control article), study timing (Day 0 acute to more than Day 90 chronic survival studies), and methods of tissue analysis (tissue dissection - staining methods). These considerations are discussed and illustrated using the fresh extirpated porcine longissimus muscle model for endometrial ablation.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. 
They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than by AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
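The competing selection criteria are straightforward to compute from each model's maximized log-likelihood: AIC = 2k - 2 ln L, AICc adds a small-sample correction, and the backward-elimination step compares nested models with a likelihood-ratio statistic against a chi-square distribution. A sketch with invented log-likelihoods and parameter counts:

```python
def aic(log_lik, k):
    """Akaike information criterion."""
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    """AIC with the small-sample correction term."""
    return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

def lrt_statistic(log_lik_full, log_lik_reduced):
    """Likelihood-ratio statistic; compare to a chi-square with df equal to
    the number of parameters dropped when merging partitions."""
    return 2 * (log_lik_full - log_lik_reduced)

# hypothetical fits: a 2-partition model vs. a merged 1-partition model
n = 500                                   # codon sites
full    = {"lnL": -1234.5, "k": 8}
reduced = {"lnL": -1236.0, "k": 5}

print(aic(full["lnL"], full["k"]), aic(reduced["lnL"], reduced["k"]))
print(round(aicc(reduced["lnL"], reduced["k"], n), 1))
print(lrt_statistic(full["lnL"], reduced["lnL"]))
```

With these invented numbers the reduced model has the lower AIC, while backward elimination would retain it only if the likelihood-ratio statistic fails the stringent p = 0.0001 cut-off.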
Nie, Zhi; Vairavan, Srinivasan; Narayan, Vaibhav A; Ye, Jieping; Li, Qingqin S
2018-01-01
Identification of risk factors of treatment resistance may be useful to guide treatment selection, avoid inefficient trial-and-error, and improve major depressive disorder (MDD) care. We extended the work in predictive modeling of treatment resistant depression (TRD) via partition of the data from the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) cohort into a training and a testing dataset. We also included data from a small yet completely independent cohort RIS-INT-93 as an external test dataset. We used features from enrollment and level 1 treatment (up to week 2 response only) of STAR*D to explore the feature space comprehensively and applied machine learning methods to model TRD outcome at level 2. For TRD defined using QIDS-C16 remission criteria, multiple machine learning models were internally cross-validated in the STAR*D training dataset and externally validated in both the STAR*D testing dataset and RIS-INT-93 independent dataset with an area under the receiver operating characteristic curve (AUC) of 0.70-0.78 and 0.72-0.77, respectively. The upper bound for the AUC achievable with the full set of features could be as high as 0.78 in the STAR*D testing dataset. Model developed using top 30 features identified using feature selection technique (k-means clustering followed by χ2 test) achieved an AUC of 0.77 in the STAR*D testing dataset. In addition, the model developed using overlapping features between STAR*D and RIS-INT-93, achieved an AUC of > 0.70 in both the STAR*D testing and RIS-INT-93 datasets. Among all the features explored in STAR*D and RIS-INT-93 datasets, the most important feature was early or initial treatment response or symptom severity at week 2. These results indicate that prediction of TRD prior to undergoing a second round of antidepressant treatment could be feasible even in the absence of biomarker data.
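The AUC values reported can be computed without an explicit ROC curve via the Mann-Whitney formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative one, counting ties as one half. A minimal sketch with invented risk scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive scores higher than a negative
    (Mann-Whitney formulation), with ties counted as 0.5."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# illustrative predicted risk scores for treatment-resistant depression
trd     = [0.9, 0.8, 0.75, 0.6, 0.55]   # treatment-resistant cases
non_trd = [0.7, 0.5, 0.4, 0.35, 0.2]    # responders
print(auc(trd, non_trd))
```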
A discriminant function model for admission at undergraduate university level
NASA Astrophysics Data System (ADS)
Ali, Hamdi F.; Charbaji, Abdulrazzak; Hajj, Nada Kassim
1992-09-01
The study is aimed at predicting objective criteria based on a statistically tested model for admitting undergraduate students to Beirut University College. The University is faced with a dual problem of having to select only a fraction of an increasing number of applicants, and of trying to minimize the number of students placed on academic probation (currently 36 percent of new admissions). Out of 659 new students, a sample of 272 students (45 percent) were selected; these were all the students on the Dean's list and on academic probation. With academic performance as the dependent variable, the model included ten independent variables and their interactions. These variables included the type of high school, the language of instruction in high school, recommendations, sex, academic average in high school, score on the English Entrance Examination, the major in high school, and whether the major was originally applied for by the student. Discriminant analysis was used to evaluate the relative weight of the independent variables, and from the analysis three equations were developed, one for each academic division in the College. The predictive power of these equations was tested by using them to classify students not in the selected sample into successful and unsuccessful ones. Applicability of the model to other institutions of higher learning is discussed.
Item Response Models for Examinee-Selected Items
ERIC Educational Resources Information Center
Wang, Wen-Chung; Jin, Kuan-Yu; Qiu, Xue-Lan; Wang, Lei
2012-01-01
In some tests, examinees are required to choose a fixed number of items from a set of given items to answer. This practice creates a challenge to standard item response models, because more capable examinees may have an advantage by making wiser choices. In this study, we developed a new class of item response models to account for the choice…
University Macro Analytic Simulation Model.
ERIC Educational Resources Information Center
Baron, Robert; Gulko, Warren
The University Macro Analytic Simulation System (UMASS) has been designed as a forecasting tool to help university administrators make budgeting decisions. Alternative budgeting strategies can be tested on a computer model and then an operational alternative can be selected on the basis of the most desirable projected outcome. UMASS uses readily…
Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv
2017-02-01
Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57 K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iterations 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%) and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and even substantially lower, to 0.22 to 0.25, when they were not full-sibs.
Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that using a much smaller training sample size compared to similar studies in livestock, GS can substantially improve the selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
Soft sensor for real-time cement fineness estimation.
Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir
2015-03-01
This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when the information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on a process variable normally provided by off-line laboratory tests performed at large time intervals. Cement fineness is one of the crucial parameters that define the quality of produced cement. Providing real-time information on cement fineness using soft sensors can overcome limitations and problems that originate from a lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models that had the best performance and the capacity to adapt to changes in the cement grinding circuit were selected to implement soft sensors. Soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm the results obtained during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
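The consensus step described above (combining the rankings of many QA methods) can be sketched as a simple rank-averaging scheme. This is an illustrative sketch, not the MULTICOM implementation, and the score values below are hypothetical.

```python
import numpy as np

def consensus_rank(qa_scores):
    """Rank models by averaging their per-method ranks.

    qa_scores: (n_models, n_methods) array; higher score = better model.
    Returns model indices sorted best-first.
    """
    order = np.argsort(-qa_scores, axis=0)          # best-first order per QA method
    ranks = np.empty_like(order)
    for j in range(qa_scores.shape[1]):
        ranks[order[:, j], j] = np.arange(qa_scores.shape[0])
    mean_rank = ranks.mean(axis=1)                  # average rank across methods
    return np.argsort(mean_rank)                    # best-first consensus ordering

scores = np.array([[0.6, 0.7],    # model 0 (hypothetical QA scores)
                   [0.9, 0.8],    # model 1
                   [0.3, 0.5]])   # model 2
print(consensus_rank(scores))     # model 1 ranked first
```

A real pipeline would feed many more QA methods and models in, and follow the ranking with the structural averaging step the abstract mentions.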
Parameter estimation and order selection for an empirical model of VO2 on-kinetics.
Alata, O; Bernard, O
2007-04-27
In humans, VO2 on-kinetics are noisy numerical signals that reflect the pulmonary oxygen exchange kinetics at the onset of exercise. They are empirically modelled as a sum of an offset and delayed exponentials. The number of delayed exponentials, i.e. the order of the model, is commonly supposed to be 1 for low-intensity exercises and 2 for high-intensity exercises. As no ground truth has ever been provided to validate these postulates, physiologists still need statistical methods to verify their hypotheses about the number of exponentials of the VO2 on-kinetics, especially in the case of high-intensity exercises. Our objectives are first to develop accurate methods for estimating the parameters of the model at a fixed order, and then to propose statistical tests for selecting the appropriate order. In this paper, we provide, on simulated data, the performance of Simulated Annealing for estimating model parameters and the performance of Information Criteria for selecting the order. These simulated data are generated with both single-exponential and double-exponential models, and corrupted by additive white Gaussian noise. The performance is given at various Signal to Noise Ratios (SNR). Considering parameter estimation, results show that the confidence of the estimated parameters is improved by increasing the SNR of the response to be fitted. Considering model selection, results show that Information Criteria are appropriate statistical criteria for selecting the number of exponentials.
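The order-selection idea (fit mono- and bi-exponential models to the response, then compare an information criterion) can be sketched with SciPy's `curve_fit`. This is a minimal illustration on synthetic data, not the authors' Simulated Annealing estimator, and all parameter values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(t, a0, a1, tau1, d1):
    """Offset plus one delayed exponential (order-1 model)."""
    return a0 + np.where(t > d1, a1 * (1 - np.exp(-np.maximum(t - d1, 0) / tau1)), 0.0)

def bi(t, a0, a1, tau1, d1, a2, tau2, d2):
    """Offset plus two delayed exponentials (order-2 model)."""
    return mono(t, a0, a1, tau1, d1) + mono(t, 0.0, a2, tau2, d2)

def aic(y, yhat, k):
    """Akaike information criterion from the residual sum of squares."""
    n = len(y)
    return n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k

rng = np.random.default_rng(0)
t = np.linspace(0, 360, 181)                         # seconds from exercise onset
y = mono(t, 0.5, 2.0, 30.0, 15.0) + rng.normal(0, 0.05, t.size)

p1, _ = curve_fit(mono, t, y, p0=[0.4, 1.5, 25.0, 10.0],
                  bounds=([0, 0, 0.1, 0], [np.inf] * 4))
p2, _ = curve_fit(bi, t, y, p0=[0.4, 1.0, 25.0, 10.0, 1.0, 35.0, 20.0],
                  bounds=([0, 0, 0.1, 0, 0, 0.1, 0], [np.inf] * 7), maxfev=20000)

aic1 = aic(y, mono(t, *p1), k=4)                     # lower AIC = preferred order
aic2 = aic(y, bi(t, *p2), k=7)
print("selected order:", 1 if aic1 <= aic2 else 2)
```

In practice the criterion would be evaluated at several SNR levels, as in the study, before trusting the selected order.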
NASA Astrophysics Data System (ADS)
Khai Tiu, Ervin Shan; Huang, Yuk Feng; Ling, Lloyd
2018-03-01
An accurate streamflow forecasting model is important for the development of a flood mitigation plan to ensure sustainable development for a river basin. This study adopted the Variational Mode Decomposition (VMD) data-preprocessing technique to process and denoise the rainfall data before feeding it into the Support Vector Machine (SVM) streamflow forecasting model, in order to improve the performance of the selected model. Rainfall data and river water level data for the period 1996-2016 were used for this purpose. Homogeneity tests (Standard Normal Homogeneity Test, the Buishand Range Test, the Pettitt Test and the Von Neumann Ratio Test) and normality tests (Shapiro-Wilk Test, Anderson-Darling Test, Lilliefors Test and Jarque-Bera Test) had been carried out on the rainfall series. Homogeneous and non-normally distributed data were found in all the stations, respectively. From the recorded rainfall data, it was observed that the Dungun River Basin possessed higher monthly rainfall from November to February, during the Northeast Monsoon. Thus, the monthly and seasonal rainfall series of this monsoon were the main focus of this research, as floods usually happen during the Northeast Monsoon period. The predicted water levels from the SVM model were assessed against the observed water levels using non-parametric statistical tests (Biased Method, Kendall's Tau B Test and Spearman's Rho Test).
Patterson, Olga V; Forbush, Tyler B; Saini, Sameer D; Moser, Stephanie E; DuVall, Scott L
2015-01-01
In order to measure the level of utilization of colonoscopy procedures, identifying the primary indication for the procedure is required. Colonoscopies may be utilized not only for screening, but also for diagnostic or therapeutic purposes. To determine whether a colonoscopy was performed for screening, we created a natural language processing system to identify colonoscopy reports in the electronic medical record system and extract indications for the procedure. A rule-based model and three machine-learning models were created using 2,000 manually annotated clinical notes of patients cared for in the Department of Veterans Affairs. Performance of the models was measured and compared. Analysis of the models on a test set of 1,000 documents indicates that the rule-based system's performance stays fairly constant when evaluated on the training and testing sets, whereas the machine-learning model without feature selection showed a significant decrease in performance. Therefore, the rule-based classification system appears to be more robust than a machine-learning system in cases where no feature selection is performed.
Ensemble habitat mapping of invasive plant species
Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.
2010-01-01
Ensemble species distribution models combine the strengths of several species environmental matching models, while minimizing the weakness of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and maximum entropy model or Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California; and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three models for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.
Scattering Properties of Large Irregular Cosmic Dust Particles at Visible Wavelengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escobar-Cerezo, J.; Palmer, C.; Muñoz, O.
The effect of internal inhomogeneities and surface roughness on the scattering behavior of large cosmic dust particles is studied by comparing model simulations with laboratory measurements. The present work shows the results of an attempt to model a dust sample measured in the laboratory with simulations performed by a ray-optics model code. We consider this dust sample a good analogue for interplanetary and interstellar dust as it shares its refractive index with known materials in these media. Several sensitivity tests have been performed for both structural cases (internal inclusions and surface roughness). Three different samples have been selected to mimic inclusion/coating inhomogeneities: two measured scattering matrices of hematite and white clay, and a simulated matrix for water ice. These three matrices are selected to cover a wide range of imaginary refractive indices. The selection of these materials also seeks to study astrophysical environments of interest such as Mars, where hematite and clays have been detected, and comets. Based on the results of the sensitivity tests shown in this work, we perform calculations for a size distribution of a silicate-type host particle model with inclusions and surface roughness to reproduce the experimental measurements of a dust sample. The model fits the measurements quite well, proving that surface roughness and internal structure play a role in the scattering pattern of irregular cosmic dust particles.
Moving Model Test of High-Speed Train Aerodynamic Drag Based on Stagnation Pressure Measurements
Yang, Mingzhi; Du, Juntao; Huang, Sha; Zhou, Dan
2017-01-01
A moving model test method based on stagnation pressure measurements is proposed to measure the train aerodynamic drag coefficient. Because the front tip of a high-speed train has a high pressure area and because a stagnation point occurs in the center of this region, the pressure of the stagnation point is equal to the dynamic pressure of the sensor tube based on the obtained train velocity. The first derivative of the train velocity is taken to calculate the acceleration of the train model ejected by the moving model system without additional power. According to Newton's second law, the aerodynamic drag coefficient can be resolved through many tests at different train speeds selected within a relatively narrow range. Comparisons are conducted with wind tunnel tests and numerical simulations, and good agreement is obtained, with differences of less than 6.1%. Therefore, the moving model test method proposed in this paper is feasible and reliable. PMID:28095441
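The chain of computations in the abstract (stagnation pressure → velocity → acceleration → Newton's second law → drag coefficient) can be sketched numerically. This is an illustrative reconstruction with made-up mass, density, and frontal-area values, and it ignores mechanical resistance; it is not the paper's test procedure.

```python
import numpy as np

def drag_coefficient(p_stag, t, m, rho, A):
    """Estimate Cd from a gauge stagnation-pressure time history.

    p_stag [Pa] sampled at times t [s]; m = model mass [kg];
    rho = air density [kg/m^3]; A = frontal area [m^2].
    Assumes aerodynamic drag is the only decelerating force.
    """
    v = np.sqrt(2.0 * p_stag / rho)        # stagnation pressure -> velocity
    a = np.gradient(v, t)                  # first derivative -> acceleration
    # Newton's second law: m*a = -0.5*rho*v^2*Cd*A
    cd = -2.0 * m * a / (rho * v ** 2 * A)
    return cd.mean()                       # average over the coasting run

# Synthetic coasting run with a known Cd as a sanity check (illustrative numbers)
m, rho, A, cd_true = 8.0, 1.2, 0.01, 0.2
t = np.linspace(0, 2, 400)
k = 0.5 * rho * cd_true * A / m
v = 80.0 / (1 + 80.0 * k * t)              # analytic solution of dv/dt = -k*v^2
p = 0.5 * rho * v ** 2
print(round(drag_coefficient(p, t, m, rho, A), 3))   # ≈ 0.2
```

The actual method averages over many runs at different ejection speeds, which the sketch omits.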
NASA Technical Reports Server (NTRS)
Berry, R. L.; Tegart, J. R.; Demchak, L. J.
1979-01-01
Thirty sets of test data selected from the 89 low-g aircraft tests flown by the NASA KC-135 zero-g aircraft are listed in tables with their accompanying test conditions. The data for each test consist of time history plots of digitized data (in engineering units) and time history plots of the load cell data transformed to the tank axis system. The transformed load cell data were developed for future analytical comparisons; therefore, these data were transformed and plotted from the time at which the aircraft Z-axis acceleration passed through 1-g. There are 14 time history plots per test condition. The contents of each plot are shown in a table.
An Adaptive Genetic Association Test Using Double Kernel Machines
Zhan, Xiang; Epstein, Michael P.; Ghosh, Debashis
2014-01-01
Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway and test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study. PMID:26640602
Ni, W; Song, X; Cui, J
2014-03-01
The purpose of this study was to test the mutant selection window (MSW) hypothesis with Escherichia coli exposed to levofloxacin in a rabbit model and to compare in vivo and in vitro exposure thresholds that restrict the selection of fluoroquinolone-resistant mutants. Local infection with E. coli was established in rabbits, and the infected animals were treated orally with various doses of levofloxacin once a day for five consecutive days. Changes in levofloxacin concentration and levofloxacin susceptibility were monitored at the site of infection. The MICs of E. coli increased when levofloxacin concentrations at the site of infection fluctuated between the lower and upper boundaries of the MSW, defined in vitro as the minimum inhibitory concentration (MIC99) and the mutant prevention concentration (MPC), respectively. The pharmacodynamic thresholds at which resistant mutants are not selected in vivo were estimated as AUC24/MPC > 20 h or AUC24/MIC > 60 h, where AUC24 is the area under the drug concentration-time curve over a 24-h interval. Our findings demonstrated that the MSW exists in vivo. The AUC24/MPC ratio that prevented resistant mutants from being selected estimated in vivo is consistent with that observed in vitro, indicating it might be a reliable index for guiding the optimization of antimicrobial treatment regimens to suppress the selection of antimicrobial resistance.
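The pharmacodynamic decision rule reported above can be written down directly. The numeric thresholds (AUC24/MPC > 20 h, AUC24/MIC > 60 h) come from the abstract; the example drug-exposure values below are hypothetical.

```python
def msw_indices(auc24, mic, mpc):
    """Pharmacodynamic indices from the study.

    auc24 in mg*h/L; mic and mpc in mg/L. Returns (AUC24/MPC, AUC24/MIC),
    both in hours.
    """
    return auc24 / mpc, auc24 / mic

def suppresses_resistance(auc24, mic, mpc):
    """True if the estimated in vivo thresholds for not selecting
    resistant mutants (AUC24/MPC > 20 h or AUC24/MIC > 60 h) are met."""
    r_mpc, r_mic = msw_indices(auc24, mic, mpc)
    return r_mpc > 20 or r_mic > 60

# Hypothetical exposure: AUC24 = 48 mg*h/L, MIC = 0.5 mg/L, MPC = 2 mg/L
print(suppresses_resistance(48.0, 0.5, 2.0))   # 48/2 = 24 h > 20 h -> True
```

Such index calculations are how MSW-based dosing targets are typically screened before committing to a regimen.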
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Bret, E-mail: jackson@chem.umass.edu; Nattino, Francesco; Kroes, Geert-Jan
The dissociative chemisorption of methane on metal surfaces is of great practical and fundamental importance. Not only is it the rate-limiting step in the steam reforming of natural gas, the reaction exhibits interesting mode-selective behavior and a strong dependence on the temperature of the metal. We present a quantum model for this reaction on Ni(100) and Ni(111) surfaces based on the reaction path Hamiltonian. The dissociative sticking probabilities computed using this model agree well with available experimental data with regard to variation with incident energy, substrate temperature, and the vibrational state of the incident molecule. We significantly expand the vibrational basis set relative to earlier studies, which allows reaction probabilities to be calculated for doubly excited initial vibrational states, though it does not lead to appreciable changes in the reaction probabilities for singly excited initial states. Sudden models used to treat the center of mass motion parallel to the surface are compared with results from ab initio molecular dynamics and found to be reasonable. Similar comparisons for molecular rotation suggest that our rotationally adiabatic model is incorrect, and that sudden behavior is closer to reality. Such a model is proposed and tested. A model for predicting mode-selective behavior is tested, with mixed results, though we find it is consistent with experimental studies of normal vs. total (kinetic) energy scaling. Models for energy transfer into lattice vibrations are also examined.
Test Data Analysis of a Spray Bar Zero-Gravity Liquid Hydrogen Vent System for Upper Stages
NASA Technical Reports Server (NTRS)
Hedayat, A.; Bailey, J. W.; Hastings, L. J.; Flachbart, R. H.
2003-01-01
To support development of a zero-gravity pressure control capability for liquid hydrogen (LH2), a series of thermodynamic venting system (TVS) tests was conducted in 1996 and 1998 using the Marshall Space Flight Center (MSFC) multipurpose hydrogen test bed (MHTB). These tests were performed with ambient heat leaks of approximately 20 and 50 W for tank fill levels of 90%, 50%, and 25%. TVS performance testing revealed that the spray bar was highly effective in providing tank pressure control within a 7-kPa band (131-138 kPa), and complete destratification of the liquid and the ullage was achieved under all test conditions. Seven of the MHTB tests were correlated with the TVS performance analytical model. The tests were selected to encompass the range of tank fill levels, ambient heat leaks, operational modes, and ullage pressurants. Ullage pressure and temperature and bulk liquid saturation pressure and temperature predicted by the TVS model were compared with the test data. During extended self-pressurization periods, following tank lockup, the model predicted faster pressure rise rates than were measured. However, once the system entered the cyclic mixing/venting operational mode, the modeled and measured data were quite similar.
A DMAP Program for the Selection of Accelerometer Locations in MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Peck, Jeff; Torres, Isaias
2004-01-01
A new program for selecting sensor locations has been written in the DMAP (Direct Matrix Abstraction Program) language of MSC/NASTRAN. The program implements the method of Effective Independence for selecting sensor locations, and is executed within a single NASTRAN analysis as a "rigid format alter" to the normal modes solution sequence (SOL 103). The user of the program is able to choose among various analysis options using Case Control and Bulk Data entries. Algorithms tailored for the placement of both uni-axial and tri-axial accelerometers are available, as well as several options for including the model's mass distribution in the calculations. Target modes for the Effective Independence analysis are selected from the MSC/NASTRAN ASET modes calculated by the "SOL 103" solution sequence. The initial candidate sensor set is also under user control, and is selected from the ASET degrees of freedom. Analysis results are printed to the MSC/NASTRAN output file (*.f06), and may include the current candidate sensor set, and its associated Effective Independence distribution, at user-specified iteration intervals. At the conclusion of the analysis, the model is reduced to the final sensor set, and frequencies and orthogonality checks are printed. Example results are given for a pre-test analysis of NASA's five-segment solid rocket booster modal test.
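The Effective Independence iteration itself is compact: rank candidate DOFs by their contribution to the Fisher information of the target modes, then repeatedly delete the weakest. Below is a generic numpy sketch of the uni-axial version, not the DMAP code; the mode-shape matrix is random for illustration, and the tri-axial and mass-weighted variants the program offers are not shown.

```python
import numpy as np

def effective_independence(phi, n_sensors):
    """Prune candidate DOFs by the Effective Independence method.

    phi: (n_dof, n_modes) target mode-shape matrix (rows = candidate DOFs).
    Returns the indices of the retained DOFs.
    """
    keep = np.arange(phi.shape[0])
    while keep.size > n_sensors:
        a = phi[keep]
        # Ed = leverage of each DOF on the Fisher information matrix A^T A
        q = a @ np.linalg.inv(a.T @ a) @ a.T
        ed = np.diag(q)
        keep = np.delete(keep, np.argmin(ed))   # drop least-contributing DOF
    return keep

rng = np.random.default_rng(1)
phi = rng.normal(size=(12, 3))   # 12 candidate DOFs, 3 target modes (synthetic)
print(sorted(effective_independence(phi, 4)))
```

In a real pre-test analysis, `phi` would come from the ASET modes of the SOL 103 run rather than random numbers.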
Race-Related Cognitive Test Bias in the ACTIVE Study: A MIMIC Model Approach
Aiken Morgan, Adrienne T.; Marsiske, Michael; Dzierzewski, Joseph; Jones, Richard N.; Whitfield, Keith E.; Johnson, Kathy E.; Cresci, Mary K.
2010-01-01
The present study investigated evidence for race-related test bias in cognitive measures used in the baseline assessment of the ACTIVE clinical trial. Test bias against African Americans has been documented in both cognitive aging and early lifespan studies. Despite significant mean performance differences, Multiple Indicators Multiple Causes (MIMIC) models suggested most differences were at the construct level. There was little evidence that specific measures put either group at particular advantage or disadvantage and little evidence of cognitive test bias in this sample. Small group differences in education, cognitive status, and health suggest positive selection may have attenuated possible biases. PMID:20845121
Development of an inflatable radiator system. [for space shuttles
NASA Technical Reports Server (NTRS)
Leach, J. W.
1976-01-01
Conceptual designs of an inflatable radiator system developed for supplying short duration supplementary cooling of space vehicles are described along with parametric trade studies, materials evaluation/selection studies, thermal and structural analyses, and numerous element tests. Fabrication techniques developed in constructing the engineering models and performance data from the model thermal vacuum tests are included. Application of these data to refining the designs of the flight articles and to constructing a full scale prototype radiator is discussed.
Miller, Robert T.; Delin, G.N.
1994-01-01
A three-dimensional, anisotropic, nonisothermal, ground-water-flow, and thermal-energy-transport model was constructed to simulate the four short-term test cycles. The model was used to simulate the entire short-term testing period of approximately 400 days. The only model properties varied during model calibration were longitudinal and transverse thermal dispersivities, which, for final calibration, were simulated as 3.3 and 0.33 meters, respectively. The model was calibrated by comparing model-computed results to (1) measured temperatures at selected altitudes in four observation wells, (2) measured temperatures at the production well, and (3) calculated thermal efficiencies of the aquifer. Model-computed withdrawal-water temperatures were within an average of about 3 percent of measured values and model-computed aquifer-thermal efficiencies were within an average of about 5 percent of calculated values for the short-term test cycles. These data indicate that the model accurately simulated thermal-energy storage within the Franconia-Ironton-Galesville aquifer.
Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling
NASA Astrophysics Data System (ADS)
Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.
2017-04-01
Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), that is, the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009).
Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
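The evidence-by-importance-sampling idea can be illustrated on a toy conjugate problem where the marginal likelihood is known in closed form. The sketch below fits a single Gaussian proposal to posterior samples; GMIS fits a Gaussian mixture, and its posterior samples come from DREAM rather than being drawn directly as here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
y_obs = 1.0                                   # single observation (toy model)

def log_prior(th):
    return norm.logpdf(th, 0.0, 1.0)          # theta ~ N(0, 1)

def log_lik(th):
    return norm.logpdf(y_obs, th, 1.0)        # y | theta ~ N(theta, 1)

# Stand-in for DREAM output: the posterior here is N(y/2, sqrt(1/2)).
post = rng.normal(y_obs / 2, np.sqrt(0.5), size=5000)

# Fit the importance (proposal) density to the posterior samples;
# a single Gaussian here, where GMIS would fit a mixture.
mu, sd = post.mean(), 1.2 * post.std()        # mild inflation for tail coverage
th = rng.normal(mu, sd, size=20000)
log_w = log_prior(th) + log_lik(th) - norm.logpdf(th, mu, sd)
Z = np.exp(log_w).mean()                      # marginal likelihood estimate

Z_exact = norm.pdf(y_obs, 0.0, np.sqrt(2.0))  # analytic evidence: N(y; 0, 2)
print(round(Z, 3), round(Z_exact, 3))         # estimate tracks the exact value
```

Because the proposal is fitted to the posterior, the importance weights stay bounded, which is the property that makes this family of estimators stable compared with naive prior sampling.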
Comparison of the CEAS and Williams-type barley yield models for North Dakota and Minnesota
NASA Technical Reports Server (NTRS)
Leduc, S. (Principal Investigator)
1982-01-01
The CEAS and Williams-type models were compared based on specified selection criteria, which included a ten-year bootstrap test (1970-1979). On this basis, the models were quite comparable; however, the CEAS model was slightly better overall. The Williams-type model seemed better for the 1974 estimates. Because spring wheat yield was particularly low that year, the Williams-type model should not be excluded from further consideration.
Azad Henareh Khalyani; William A. Gould; Eric Harmsen; Adam Terando; Maya Quinones; Jaime A. Collazo
2016-01-01
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages.
Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
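The proportion-estimation step in un-mixing studies of this kind can be sketched as a non-negative least-squares problem with a sum-to-one constraint on the source contributions. The sketch below is illustrative only: the source geochemistry matrix, tracer count, and constraint weighting are hypothetical assumptions, not the mixing model actually used in the study.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_tracers, n_sources = 8, 3

# Hypothetical source geochemistry: mean tracer concentration per source
S = rng.uniform(10.0, 100.0, size=(n_tracers, n_sources))
true_p = np.array([0.5, 0.3, 0.2])   # known laboratory mixing proportions
mixture = S @ true_p                 # tracer signature of the prepared mixture

# Soft sum-to-one constraint: append a heavily weighted row of ones
w = 100.0
A = np.vstack([S, w * np.ones(n_sources)])
b = np.concatenate([mixture, [w]])

p_hat, _ = nnls(A, b)                # non-negative source proportions
p_hat /= p_hat.sum()                 # renormalize exactly to 1
```

With noise-free tracer data the estimate recovers the known proportions exactly; with real samples, the residual norm returned by `nnls` plays a role analogous to the goodness-of-fit (GOF) statistic reported above.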
A survey of variable selection methods in two Chinese epidemiology journals
2010-01-01
Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis (e.g., stepwise or individual significance testing of model coefficients); C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals. PMID:20920252
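Category C above (bivariate screening at the 5% alpha-level followed by a multivariable model on the survivors) can be illustrated with a minimal sketch. The data, predictor count, and the use of ordinary least squares in place of a full epidemiological regression are assumptions made for illustration, and the sketch deliberately reproduces the multiple-testing weakness the review criticizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, p = 200, 6
X = rng.normal(size=(n, p))
# Outcome truly depends only on predictors 0 and 2
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=1.0, size=n)

# Step 1 (bivariate screening): keep predictors individually significant at 0.05
keep = [j for j in range(p) if stats.pearsonr(X[:, j], y)[1] < 0.05]

# Step 2 (multivariable model fitted on the survivors only)
A = np.column_stack([np.ones(n), X[:, keep]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Note the pitfall: with many candidate predictors, roughly 5% of pure-noise variables pass the screen by chance, which is one reason the review flags this two-step practice.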
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex
Lindsay, Grace W.
2017-01-01
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
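A minimal sketch of the circuit idea, assuming a rectified-linear random feedforward layer and a normalized Hebbian rule; the layer sizes, learning rate, and normalization scheme are illustrative choices, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 20, 50

# Random feedforward connectivity (the baseline model in the study)
W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)

def hebbian_step(W, x, eta=0.05):
    """One Hebbian update: strengthen weights between co-active units."""
    y = np.maximum(W @ x, 0.0)              # rectified post-synaptic response
    W = W + eta * np.outer(y, x)            # Hebb: dW proportional to post * pre
    # Row normalization keeps weights bounded (a common stabilizing choice)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W

for _ in range(100):                        # expose the network to random stimuli
    x = rng.normal(size=n_in)
    W = hebbian_step(W, x)
```

The key qualitative point from the abstract is that such learning restructures an initially random W, moving the population's selectivity profile toward what is observed in PFC data.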
Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow
2017-01-01
Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. Comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.
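Step (ii) of the pipeline, wrapper-style forward selection scored by cross-validation, can be sketched as follows. For brevity this sketch scores each candidate subset with a cross-validated least-squares fit rather than the four machine-learning algorithms used in the study, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 300, 10
X = rng.normal(size=(n, p))
# Outcome truly depends only on variables 0 and 3
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

def cv_mse(X, y, k=10):
    """k-fold cross-validated MSE of an ordinary least-squares fit."""
    idx = np.arange(len(y))
    errs = []
    for f in np.array_split(idx, k):
        tr = np.setdiff1d(idx, f)
        A = np.column_stack([np.ones(len(tr)), X[tr]])
        coef, *_ = np.linalg.lstsq(A, y[tr], rcond=None)
        pred = np.column_stack([np.ones(len(f)), X[f]]) @ coef
        errs.append(np.mean((y[f] - pred) ** 2))
    return np.mean(errs)

# Forward selection: greedily add the variable that most improves CV error
selected, remaining, best = [], list(range(p)), np.inf
while remaining:
    scores = {j: cv_mse(X[:, selected + [j]], y) for j in remaining}
    j, s = min(scores.items(), key=lambda kv: kv[1])
    if s >= best:                 # stop when no candidate improves CV error
        break
    selected.append(j)
    remaining.remove(j)
    best = s
```

The wrapper logic is the same regardless of the learner plugged into the scoring function, which is how the study compared forward selection across its four algorithms.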
The Modular Modeling System (MMS): User's Manual
Leavesley, G.H.; Restrepo, Pedro J.; Markstrom, S.L.; Dixon, M.; Stannard, L.G.
1996-01-01
The Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide the research and operational framework needed to support development, testing, and evaluation of physical-process algorithms and to facilitate integration of user-selected sets of algorithms into operational physical-process models. MMS uses a module library that contains modules for simulating a variety of water, energy, and biogeochemical processes. A model is created by selectively coupling the most appropriate modules from the library to create a 'suitable' model for the desired application. Where existing modules do not provide appropriate process algorithms, new modules can be developed. The MMS user's manual provides installation instructions and a detailed discussion of system concepts, module development, and model development and application using the MMS graphical user interface.
Archer, C Ruth; Hunt, John
2015-11-01
Aging evolved because the strength of natural selection declines over the lifetime of most organisms. Weak natural selection late in life allows the accumulation of deleterious mutations and may favor alleles that have positive effects on fitness early in life, but costly pleiotropic effects expressed later on. While this decline in natural selection is central to longstanding evolutionary explanations for aging, a role for sexual selection and sexual conflict in the evolution of lifespan and aging has only been identified recently. Testing how sexual selection and sexual conflict affect lifespan and aging is challenging as it requires quantifying male age-dependent reproductive success. This is difficult in the invertebrate model organisms traditionally used in aging research. Research using crickets (Orthoptera: Gryllidae), where reproductive investment can be easily measured in both sexes, has offered exciting and novel insights into how sexual selection and sexual conflict affect the evolution of aging, both in the laboratory and in the wild. Here we discuss how sexual selection and sexual conflict can be integrated alongside evolutionary and mechanistic theories of aging using crickets as a model. We then highlight the potential for research using crickets to further advance our understanding of lifespan and aging. Copyright © 2015 Elsevier Inc. All rights reserved.
Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.
Ell, Shawn W
2013-12-01
The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.
Test Program for Assessing Vulnerability of Industrial Equipment to Nuclear Air Blast.
1983-10-01
Performing organization: Scientific Service, Inc., Redwood City, CA (Work Unit 1124F). ...vulnerability, but perhaps less expensive, to be selected and substituted, with an eye to cost control. 5. MODELING AND SCALING CONSIDERATIONS. Reiterating...behavior and properties of the test items and interfaces that control behavior (e.g., test objects/flow field, test objects/interfacing surface of
Noise level measurements on the UMTA Mark I Diagnostic Car (R42 MODEL)
DOT National Transportation Integrated Search
1971-10-01
The R42 Model mass transit car currently operating on the "N" line of the New York City Transit System was selected for experimentation and tests. For this purpose, the car was instrumented and designated as the UMTA Mark I Diagnostic Car. Noise leve...
A Theory-Driven Model of Community College Student Engagement
ERIC Educational Resources Information Center
Schuetz, Pam
2008-01-01
This mixed-methods study develops, operationalizes, and tests a new conceptual model of community college student engagement. Themes emerging from participant observations and semistructured interviews with 30 adult students enrolled at a Large Best Practices Community College (LBPCC) over the 2005-2006 academic year are used to guide selection of…
Turbulence Control Through Selective Surface Heating Using Microwave Radiation
2013-05-01
models. This type of plasma actuator needs further development to meet the aerodynamic requirements of wind-tunnel experiments. 5. Ring-type plasma...modes of MW-heated elements in the aerodynamic experiment. Design of a resistive vibrator array for the airfoil model to be tested in a wind tunnel...
Overemphasis on Perfectly Competitive Markets in Microeconomics Principles Textbooks
ERIC Educational Resources Information Center
Hill, Roderick; Myatt, Anthony
2007-01-01
Microeconomic principles courses focus on perfectly competitive markets far more than other market structures. The authors examine five possible reasons for this but find none of them sufficiently compelling. They conclude that textbook authors should place more emphasis on how economists select appropriate models and test models' predictions…
A developmental model of recreation choice behavior
Daniel R. Williams
1985-01-01
Recreation choices are viewed as including, at least implicitly, a selection of an activity, a setting, and a set of companions. With development these three elements become increasingly differentiated from one another. The model is tested by examining the perceived similarities among a set of 15 recreation choices depicted in color slides.
Aerodynamic stability analysis of NASA J85-13/planar pressure pulse generator installation
NASA Technical Reports Server (NTRS)
Chung, K.; Hosny, W. M.; Steenken, W. G.
1980-01-01
A digital computer simulation model for the J85-13/Planar Pressure Pulse Generator (P3G) test installation was developed by modifying an existing General Electric compression system model. This modification included the incorporation of a novel method for describing the unsteady blade lift force. This approach significantly enhanced the capability of the model to handle unsteady flows. In addition, the frequency response characteristics of the J85-13/P3G test installation were analyzed in support of selecting instrumentation locations to avoid standing wave nodes within the test apparatus and thus low signal levels. The feasibility of employing explicit analytical expressions for surge prediction was also studied.
Van Geert, Eline; Orhon, Altan; Cioca, Iulia A; Mamede, Rui; Golušin, Slobodan; Hubená, Barbora; Morillo, Daniel
2016-01-01
Self-report personality questionnaires, traditionally offered in a graded-scale format, are widely used in high-stakes contexts such as job selection. However, job applicants may intentionally distort their answers when filling in these questionnaires, undermining the validity of the test results. Forced-choice questionnaires are allegedly more resistant to intentional distortion compared to graded-scale questionnaires, but they generate ipsative data. Ipsativity violates the assumptions of classical test theory, distorting the reliability and construct validity of the scales, and producing interdependencies among the scores. This limitation is overcome in the current study by using the recently developed Thurstonian item response theory model. As online testing in job selection contexts is increasing, the focus will be on the impact of intentional distortion on personality questionnaire data collected online. The present study intends to examine the effect of three different variables on intentional distortion: (a) test format (graded-scale versus forced-choice); (b) culture, as data will be collected in three countries differing in their attitudes toward intentional distortion (the United Kingdom, Serbia, and Turkey); and (c) cognitive ability, as a possible predictor of the ability to choose the more desirable responses. Furthermore, we aim to integrate the findings using a comprehensive model of intentional distortion. In the Anticipated Results section, three main aspects are considered: (a) the limitations of the manipulation, theoretical approach, and analyses employed; (b) practical implications for job selection and for personality assessment in a broader sense; and (c) suggestions for further research.
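The ipsativity problem described above can be demonstrated numerically: when every forced-choice block distributes a fixed number of rank points among the traits, each respondent's total score is constant, so the trait scores are linearly dependent and their covariances are constrained. The block structure and sizes below are hypothetical, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_resp, n_traits, n_blocks = 500, 4, 10

# Each forced-choice block: the respondent rank-orders the 4 traits (ranks
# 1..4), so every block contributes exactly 1+2+3+4 = 10 points per person
scores = np.zeros((n_resp, n_traits))
for _ in range(n_blocks):
    for i in range(n_resp):
        scores[i] += rng.permutation([1, 2, 3, 4])

totals = scores.sum(axis=1)            # constant: 10 blocks * 10 points = 100
cov = np.cov(scores, rowvar=False)     # rows of the covariance matrix sum to 0
```

Because each person's total is fixed, every trait score covaries with the (constant) total by exactly zero, which forces the rows of the covariance matrix to sum to zero. This is the interdependency among scores that motivates the Thurstonian IRT treatment mentioned in the abstract.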
Investigation of aeroelastic stability phenomena of a helicopter by in-flight shake test
NASA Technical Reports Server (NTRS)
Miao, W. L.; Edwards, T.; Brandt, D. E.
1976-01-01
The analytical capability of the helicopter stability program is discussed. The parameters which are found to be critical to the air resonance characteristics of the soft in-plane hingeless rotor systems are detailed. A summary of two model test programs, a 1/13.8 Froude-scaled BO-105 model and a 1.67 meter (5.5 foot) diameter Froude-scaled YUH-61A model, are presented with emphasis on the selection of the final parameters which were incorporated in the full scale YUH-61A helicopter. Model test data for this configuration are shown. The actual test results of the YUH-61A air resonance in-flight shake test stability are presented. Included are a concise description of the test setup, which employs the Grumman Automated Telemetry System (ATS), the test technique for recording in-flight stability, and the test procedure used to demonstrate favorable stability characteristics with no in-plane damping augmentation (lag damper removed). The data illustrating the stability trend of air resonance with forward speed and the stability trend of ground resonance for percent airborne are presented.
Asymmetric information in health insurance: evidence from the National Medical Expenditure Survey.
Cardon, J H; Hendel, I
2001-01-01
Adverse selection is perceived to be a major source of market failure in insurance markets. There is little empirical evidence on the extent of the problem. We estimate a structural model of health insurance and health care choices using data on single individuals from the NMES. A robust prediction of adverse-selection models is that riskier types buy more coverage and, on average, end up using more care. We test for unobservables linking health insurance status and health care consumption. We find no evidence of informational asymmetries.
Lindberg, Ann-Sofie; Oksa, Juha; Antti, Henrik; Malm, Christer
2015-01-01
Physical capacity has previously been deemed important for firefighters' physical work capacity, and aerobic fitness, muscular strength, and muscular endurance are the most frequently investigated parameters of importance. Traditionally, bivariate and multivariate linear regression statistics have been used to study relationships between physical capacities and work capacities among firefighters. An alternative way to handle datasets consisting of numerous correlated variables is to use multivariate projection analyses, such as Orthogonal Projection to Latent Structures. The first aim of the present study was to evaluate the prediction and predictive power of field and laboratory tests, respectively, on firefighters' physical work capacity on selected work tasks, and to study whether valid predictions could be achieved without anthropometric data. The second aim was to externally validate selected models. The third aim was to validate the selected models on firefighters and on civilians. A total of 38 (26 men and 12 women) + 90 (38 men and 52 women) subjects were included in the models and the external validation, respectively. The best prediction (R2) and predictive power (Q2) of Stairs, Pulling, Demolition, Terrain, and Rescue work capacities included field tests (R2 = 0.73 to 0.84, Q2 = 0.68 to 0.82). The best external validation was for Stairs work capacity (R2 = 0.80) and worst for Demolition work capacity (R2 = 0.40). In conclusion, field and laboratory tests could equally well predict physical work capacities for firefighting work tasks, and models excluding anthropometric data were valid. The predictive power was satisfactory for all included work tasks except Demolition.
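The distinction above between prediction (R2, in-sample fit) and predictive power (Q2, cross-validated fit) can be sketched for an ordinary least-squares model, with Q2 computed from leave-one-out residuals (the PRESS statistic). The data here are synthetic, and the study itself used OPLS rather than plain OLS.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 40, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.8, size=n)

A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rss = np.sum((y - A @ coef) ** 2)
tss = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - rss / tss                  # prediction: in-sample fit

press = 0.0                           # leave-one-out squared prediction error
for i in range(n):
    mask = np.arange(n) != i
    c, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
    press += (y[i] - A[i] @ c) ** 2
q2 = 1.0 - press / tss                # predictive power
```

For OLS, Q2 can never exceed R2 (each leave-one-out residual is at least as large as the corresponding in-sample residual), which is why the abstract reports the two statistics separately.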
Inferring the Mode of Selection from the Transient Response to Demographic Perturbations
NASA Astrophysics Data System (ADS)
Balick, Daniel; Do, Ron; Reich, David; Sunyaev, Shamil
2014-03-01
Despite substantial recent progress in theoretical population genetics, most models work under the assumption of a constant population size. Deviations from fixed population sizes are ubiquitous in natural populations, many of which experience population bottlenecks and re-expansions. The non-equilibrium dynamics introduced by a large perturbation in population size are generally viewed as a confounding factor. In the present work, we take advantage of the transient response to a population bottleneck to infer features of the mode of selection and the distribution of selective effects. We develop an analytic framework and a corresponding statistical test that qualitatively differentiates between alleles under additive and those under recessive or more general epistatic selection. This statistic can be used to bound the joint distribution of selective effects and dominance effects in any diploid sexual organism. We apply this technique to human population genetic data, and severely restrict the space of allowed selective coefficients in humans. Additionally, one can test a set of functionally or medically relevant alleles for the primary mode of selection, or determine the local regional variation in dominance coefficients along the genome.
Le Galliard, J-F; Paquet, M; Mugabo, M
2015-05-01
Temperament traits are seen in many animal species, and recent evolutionary models predict that they could be maintained by heterogeneous selection. We tested this prediction by examining density-dependent selection in juvenile common lizards Zootoca vivipara scored for activity, boldness and sociability at birth and at the age of 1 year. We measured three key life-history traits (juvenile survival, body growth rate and reproduction) and quantified selection in experimental populations at five density levels ranging from low to high values. We observed consistent individual differences for all behaviours on the short term, but only for activity and one boldness measure across the first year of life. At low density, growth selection favoured more sociable lizards, whereas viability selection favoured less active individuals. A significant negative correlational selection on activity and boldness existed for body growth rate irrespective of density. Thus, behavioural traits were characterized by limited ontogenic consistency, and natural selection was heterogeneous between density treatments and fitness traits. This confirms that density-dependent selection plays an important role in the maintenance of individual differences in exploration-activity and sociability. © 2015 European Society For Evolutionary Biology.
Statistical modelling of growth using a mixed model with orthogonal polynomials.
Suchocki, T; Szyda, J
2011-02-01
In statistical modelling, the effects of single-nucleotide polymorphisms (SNPs) are often regarded as time-independent. However, for traits recorded repeatedly, it is very interesting to investigate the behaviour of gene effects over time. In the analysis, simulated data from the 13th QTL-MAS Workshop (Wageningen, The Netherlands, April 2009) was used and the major goal was the modelling of genetic effects as time-dependent. For this purpose, a mixed model which describes each effect using the third-order Legendre orthogonal polynomials, in order to account for the correlation between consecutive measurements, is fitted. In this model, SNPs are modelled as fixed, while the environment is modelled as random effects. The maximum likelihood estimates of model parameters are obtained by the expectation-maximisation (EM) algorithm and the significance of the additive SNP effects is based on the likelihood ratio test, with p-values corrected for multiple testing. For each significant SNP, the percentage of the total variance contributed by this SNP is calculated. Moreover, by using a model which simultaneously incorporates effects of all of the SNPs, the prediction of future yields is conducted. As a result, 179 from the total of 453 SNPs covering 16 out of 18 true quantitative trait loci (QTL) were selected. The correlation between predicted and true breeding values was 0.73 for the data set with all SNPs and 0.84 for the data set with selected SNPs. In conclusion, we showed that a longitudinal approach allows for estimating changes of the variance contributed by each SNP over time and demonstrated that, for prediction, the pre-selection of SNPs plays an important role.
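The time-dependence described above rests on a third-order Legendre orthogonal polynomial basis. A minimal sketch of building that basis and evaluating one SNP's effect trajectory follows; the time grid and regression coefficients are hypothetical, not values from the workshop data.

```python
import numpy as np
from numpy.polynomial import legendre

days = np.linspace(0, 300, 31)     # hypothetical recording times
# Legendre polynomials are orthogonal on [-1, 1], so rescale time first
t = 2.0 * (days - days.min()) / (days.max() - days.min()) - 1.0

# Third-order basis: columns are P0(t)..P3(t), shape (31, 4)
Phi = legendre.legvander(t, 3)

beta = np.array([0.8, -0.3, 0.1, 0.05])   # hypothetical SNP coefficients
snp_effect = Phi @ beta                    # time-dependent additive effect
```

In the mixed model each fitted SNP contributes such a trajectory rather than a single constant, which is what lets the analysis track how the variance explained by a SNP changes over time.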
NASA Astrophysics Data System (ADS)
Shi, Ming F.; Zhang, Li; Zhu, Xinhai
2016-08-01
The Yoshida nonlinear isotropic/kinematic hardening material model is often selected in forming simulations where an accurate springback prediction is required. Many successful application cases in industrial scale automotive components using advanced high strength steels (AHSS) have been reported to give better springback predictions. Several issues have been raised recently in the use of the model for higher strength AHSS, including the use of two C vs. one C material parameters in the Armstrong and Frederick model (AF model), the original Yoshida model vs. the original Yoshida model with a modified hardening law, and a constant Young's modulus vs. a decayed Young's modulus as a function of plastic strain. In this paper, an industrial scale automotive component using 980 MPa strength materials is selected to study the effect of two C and one C material parameters in the AF model on both forming and springback prediction using the Yoshida model with and without the modified hardening law. The effect of a decayed Young's modulus on the springback prediction for AHSS is also evaluated. In addition, the limitations of the material parameters determined from tension and compression tests without multiple cycle tests are also discussed for components undergoing several bending and unbending deformations.
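The decayed Young's modulus option mentioned above is commonly modeled as a saturating exponential in equivalent plastic strain (the Yoshida-Uemori form). The sketch below uses hypothetical parameter values, not those of the 980 MPa steel studied in the paper.

```python
import numpy as np

E0 = 210.0e3   # initial Young's modulus, MPa (hypothetical)
Ea = 160.0e3   # saturated (asymptotic) modulus, MPa (hypothetical)
xi = 60.0      # decay rate with plastic strain (hypothetical)

def decayed_modulus(eps_p):
    """Yoshida-Uemori form: E(eps_p) = E0 - (E0 - Ea) * (1 - exp(-xi * eps_p))."""
    eps_p = np.asarray(eps_p, dtype=float)
    return E0 - (E0 - Ea) * (1.0 - np.exp(-xi * eps_p))

eps = np.linspace(0.0, 0.1, 11)    # equivalent plastic strain range
E = decayed_modulus(eps)           # modulus decays from E0 toward Ea
```

In a springback simulation, substituting E(eps_p) for a constant E0 increases the predicted elastic recovery in heavily deformed regions, which is the effect the paper evaluates for AHSS.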
Brauchli, Rebecca; Jenny, Gregor J; Füllemann, Désirée; Bauer, Georg F
2015-01-01
Studies using the Job Demands-Resources (JD-R) model commonly have a heterogeneous focus concerning the variables they investigate: selective job demands and resources, as well as burnout and work engagement. The present study applies the rationale of the JD-R model to expand the relevant outcomes of job demands and job resources by linking the JD-R model to the logic of a generic health development framework predicting more broadly positive and negative health. The resulting JD-R health model was operationalized and tested with a generalizable set of job characteristics and positive and negative health outcomes among a heterogeneous sample of 2,159 employees. Applying a theory-driven and a data-driven approach, measures which were generally relevant for all employees were selected. Results from structural equation modeling indicated that the model fitted the data. Multiple group analyses indicated invariance across six organizations, gender, job positions, and three times of measurement. Initial evidence was found for the validity of an expanded JD-R health model. Thereby this study contributes to the current research on job characteristics and health by combining the core idea of the JD-R model with the broader concepts of salutogenic and pathogenic health development processes as well as both positive and negative health outcomes.
ERIC Educational Resources Information Center
Watts, Sarah E.; Weems, Carl F.
2006-01-01
The purpose of this study was to examine the linkages among selective attention, memory bias, cognitive errors, and anxiety problems by testing a model of the interrelations among these cognitive variables and childhood anxiety disorder symptoms. A community sample of 81 youth (38 females and 43 males) aged 9-17 years and their parents completed…
ERIC Educational Resources Information Center
Smyth, Frederick L.; McArdle, John J.
2004-01-01
Using Bowen and Bok's data from 23 selective colleges, we fit multilevel logit models to test two hypotheses with implications for affirmative action and group differences in attainment of science, math, or engineering (SME) degrees. Hypothesis 1, that differences in precollege academic preparation will explain later SME graduation disparities,…
ERIC Educational Resources Information Center
Goodyear, Rodney K.; Newcomb, Micheal D.; Locke, Thomas F.
2002-01-01
Data from a community sample of 493 pregnant Latina teenagers were used to test a mediated model of mate selection with 5 classes of variables: (a) male partner characteristics (antisocial behaviors, negative relationships with women, harm risk, and relationship length), (b) young women's psychosocial variables (antisocial behaviors, drug use,…
Georgia Vocational Student Assessment Project. Final Report.
ERIC Educational Resources Information Center
Vocational Technical Education Consortium of States, Atlanta, GA.
A project was conducted to develop vocational education tests for use in Georgia secondary schools, specifically for welding, machine shop, and sheet metal courses. The project team developed an outline of an assessment model that included the following components: (1) select a program for use in developing test items; (2) verify duties, tasks,…
Comprehensive Adult Student Assessment Systems Braille Reading Assessment: An Exploratory Study
ERIC Educational Resources Information Center
Posey, Virginia K.; Henderson, Barbara W.
2012-01-01
Introduction: This exploratory study determined whether transcribing selected test items on an adult life and work skills reading test into braille could maintain the same approximate scale-score range and maintain fitness within the item response theory model as used by the Comprehensive Adult Student Assessment Systems (CASAS) for developing…
Effects of Presentation Mode and Computer Familiarity on Summarization of Extended Texts
ERIC Educational Resources Information Center
Yu, Guoxing
2010-01-01
Comparability studies on computer- and paper-based reading tests have focused on short texts and selected-response items via almost exclusively statistical modeling of test performance. The psychological effects of presentation mode and computer familiarity on individual students are under-researched. In this study, 157 students read extended…
Advances in the Application of Decision Theory to Test-Based Decision Making.
ERIC Educational Resources Information Center
van der Linden, Wim J.
This paper reviews recent research in the Netherlands on the application of decision theory to test-based decision making about personnel selection and student placement. The review is based on an earlier model proposed for the classification of decision problems, and emphasizes an empirical Bayesian framework. Classification decisions with…
Code of Federal Regulations, 2011 CFR
2011-07-01
..., and for 1985 and Later Model Year New Gasoline Fueled, Natural Gas-Fueled, Liquefied Petroleum Gas... with gasoline-fueled or methanol-fueled engines only. The Administrator does not approve the test... development and application of the requisite technology, giving appropriate consideration to the cost of...
Misconceptions of Selected Science Concepts Held by Elementary School Students
ERIC Educational Resources Information Center
Doran, Rodney L.
1972-01-01
Describes a test, administered as a motion picture, designed to measure misconceptions about the particle model of matter held by students in grades two through six. Reliability values for tests of eight misconceptions are given and the correlations of misconception scores with measures of IQ, reading, mathematics, and science ability reported.…
40 CFR 1037.525 - Special procedures for testing hybrid vehicles with power take-off.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of this section to allow testing hybrid vehicles other than electric-battery hybrids, consistent with... model, use good engineering judgment to select the vehicle type with the maximum number of PTO circuits... as needed to stabilize the battery at a full state of charge. For electric hybrid vehicles, we...
Gifted and Talented Education: A National Test Case in Peoria.
ERIC Educational Resources Information Center
Fetterman, David M.
1986-01-01
This article presents a study of a program in Peoria, Illinois, for the gifted and talented that serves as a national test case for gifted education and minority enrollment. It was concluded that referral, identification, and selection were appropriate for the program model but that inequalities resulted from socioeconomic variables. (Author/LMO)
NASA Technical Reports Server (NTRS)
Tuttle, M. E.; Brinson, H. F.
1986-01-01
The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power-law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical of T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.
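The dominance of the exponent n at long times follows from the power-law form of creep compliance, D(t) = D0 + D1*t^n: an error in n multiplies the transient term by a factor that grows without bound in t. A minimal numeric sketch, with illustrative parameter values (not the T300/5208 properties used in the paper):

```python
# Power-law creep compliance D(t) = D0 + D1 * t**n, as in the Schapery /
# Findley representation. All parameter values below are illustrative only.
D0, D1, n = 1.0, 0.05, 0.20
n_err = n * 1.05  # a hypothetical 5% error in the measured exponent n

def compliance(t, exponent):
    return D0 + D1 * t ** exponent

def relative_error(t):
    """Relative error in predicted compliance caused by the error in n."""
    exact = compliance(t, n)
    return abs(compliance(t, n_err) - exact) / exact

short_term = relative_error(1e2)  # error within a short-term test window
long_term = relative_error(1e6)   # error when extrapolated to long times
```

The same 5% error in n that is nearly invisible over a short-term test grows by an order of magnitude when the prediction is extrapolated, which is why the abstract singles out n among the seven parameters.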
Cheng, Tiejun; Li, Qingliang; Wang, Yanli; Bryant, Stephen H
2011-02-28
Aqueous solubility is recognized as a critical parameter in both early- and late-stage drug discovery. Therefore, in silico modeling of solubility has attracted extensive interest in recent years. Most previous studies have been limited to relatively small data sets with limited diversity, which in turn limits the predictability of the derived models. In this work, we present a support vector machine model for the binary classification of solubility that takes advantage of the largest known public data set, containing over 46 000 compounds with experimental solubility. Our model was optimized in combination with a reduction and recombination feature selection strategy. The best model demonstrated robust performance in both cross-validation and prediction of two independent test sets, indicating it could be a practical tool for selecting soluble compounds for screening, purchasing, and synthesizing. Moreover, owing to its use of completely public resources, our work may be used for the comparative evaluation of solubility classification studies.
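The feature-selection step can be illustrated generically. The sketch below is a plain greedy forward selection wrapped around a trivial nearest-centroid classifier, standing in for (not reproducing) the paper's reduction-and-recombination strategy and support vector machine; the data are invented:

```python
def centroid_accuracy(train, test, feats):
    """Accuracy of a nearest-centroid rule using only the chosen feature indices."""
    sums, counts = {}, {}
    for x, y in train:
        acc = sums.setdefault(y, [0.0] * len(feats))
        for i, j in enumerate(feats):
            acc[i] += x[j]
        counts[y] = counts.get(y, 0) + 1
    cents = {y: [v / counts[y] for v in s] for y, s in sums.items()}
    def predict(x):
        return min(cents, key=lambda y: sum((c - x[j]) ** 2
                                            for c, j in zip(cents[y], feats)))
    return sum(predict(x) == y for x, y in test) / len(test)

def forward_select(train, test, n_features, max_feats):
    """Greedily add whichever feature most improves held-out accuracy."""
    chosen, best = [], 0.0
    while len(chosen) < max_feats:
        scored = [(centroid_accuracy(train, test, chosen + [j]), j)
                  for j in range(n_features) if j not in chosen]
        if not scored:
            break
        score, j = max(scored)
        if score <= best:  # stop when no feature helps
            break
        chosen.append(j)
        best = score
    return chosen, best

# Invented descriptors: feature 0 separates the classes, feature 1 is noise.
train = [([0.0, 5.0], "insoluble"), ([0.2, 1.0], "insoluble"),
         ([1.0, 4.0], "soluble"), ([0.9, 0.5], "soluble")]
test = [([0.1, 2.0], "insoluble"), ([0.8, 3.0], "soluble")]
chosen, score = forward_select(train, test, n_features=2, max_feats=2)
```

With the toy data, selection keeps the informative feature and stops once the noisy one fails to improve held-out accuracy, which is the essential behavior any wrapper-style feature selection shares.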
The New Italian Seismic Hazard Model
NASA Astrophysics Data System (ADS)
Marzocchi, W.; Meletti, C.; Albarello, D.; D'Amico, V.; Luzi, L.; Martinelli, F.; Pace, B.; Pignone, M.; Rovida, A.; Visini, F.
2017-12-01
In 2015 the Seismic Hazard Center (Centro Pericolosità Sismica - CPS) of the National Institute of Geophysics and Volcanology was commissioned to coordinate the national scientific community in elaborating a new reference seismic hazard model, mainly aimed at updating the seismic building code. The CPS designed a roadmap for releasing, within three years, a significantly renewed PSHA model, with regard both to the updated input elements and to the strategies to be followed. The main requirements of the model were discussed in meetings with the experts on earthquake engineering who will then participate in the revision of the building code. The activities were organized in six tasks: program coordination, input data, seismicity models, ground motion predictive equations (GMPEs), computation and rendering, and testing. The input data task selected the most up-to-date information on seismicity (historical and instrumental), seismogenic faults, and deformation (from both seismicity and geodetic data). The seismicity models were elaborated in terms of classic source areas, fault sources, and gridded seismicity, based on different approaches. The GMPEs task selected the most recent models, accounting for their tectonic suitability and forecasting performance. The testing phase was planned to design statistical procedures for testing, against the available data, both the whole seismic hazard model and single components such as the seismicity models and the GMPEs. In this talk we show some preliminary results, summarize the overall strategy for building the new Italian PSHA model, and discuss in detail the important novelties that we put forward. Specifically, we adopt a new formal probabilistic framework to interpret the outcomes of the model and to test it meaningfully; this requires a proper definition and characterization of both aleatory variability and epistemic uncertainty, which we accomplish through an ensemble modeling strategy.
The weighting scheme for the different components of the PSHA model was built through three independent steps: a formal expert elicitation, the outcomes of the testing phase, and the correlation between those outcomes. Finally, we explore through different techniques the influence of the declustering procedure on seismic hazard.
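The ensemble-modeling strategy described above can be sketched as a weighted average of the component models' hazard curves, with the weights (however they were derived from elicitation and testing) normalized to sum to one. All numbers below are invented for illustration:

```python
def normalize(weights):
    """Scale weights so they sum to one."""
    total = sum(weights)
    return [w / total for w in weights]

def ensemble_curve(curves, weights):
    """Pointwise weighted mean of per-model exceedance-probability curves."""
    w = normalize(weights)
    return [sum(wi * c[k] for wi, c in zip(w, curves))
            for k in range(len(curves[0]))]

# Three hypothetical models' annual exceedance probabilities at four
# ground-motion levels, and combined expert/testing weights (invented).
curves = [
    [0.10, 0.040, 0.010, 0.002],
    [0.12, 0.050, 0.012, 0.003],
    [0.08, 0.030, 0.008, 0.001],
]
weights = [2.0, 1.0, 1.0]  # e.g., elicitation score combined with testing score
mean_curve = ensemble_curve(curves, weights)
```

The ensemble mean serves as the central hazard estimate, while the spread of the component curves around it provides one way to characterize the epistemic uncertainty the abstract refers to.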
A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data
DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus
2014-01-01
While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706
A model-based approach for identifying signatures of ancient balancing selection in genetic data.
DeGiorgio, Michael; Lohmueller, Kirk E; Nielsen, Rasmus
2014-08-01
While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.
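The composite likelihood ratio idea used above can be shown in miniature. The sketch below is a hypothetical toy version, not the authors' statistics: each site's derived-allele count is treated as Binomial, a genome-wide background frequency is compared against the site-local maximum-likelihood frequency, and log-likelihood ratios are summed across sites. Sites held near intermediate frequency, as expected under long-term balancing selection, score highly.

```python
import math

def binom_loglik(k, n, p):
    """Binomial log-likelihood up to a constant (the binomial coefficient)."""
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def composite_lr(counts, n, p_background):
    """Composite likelihood ratio statistic across sites.

    counts: derived-allele count at each site; n: chromosomes sampled.
    Null: every site has the genome-wide background frequency.
    Alternative: each site has its own MLE frequency (clamped away from 0/1).
    """
    stat = 0.0
    for k in counts:
        p_hat = min(max(k / n, 1e-6), 1.0 - 1e-6)  # site-local MLE
        stat += 2.0 * (binom_loglik(k, n, p_hat)
                       - binom_loglik(k, n, p_background))
    return stat

# A window of sites near 50% frequency scores far higher against a 10%
# background than a window whose frequencies match the background.
balanced = composite_lr([10, 9, 11, 10], n=20, p_background=0.1)
neutral = composite_lr([2, 1, 3, 2], n=20, p_background=0.1)
```

Because each site's numerator uses the maximizing frequency, every term is non-negative; the statistic only grows when local frequencies depart from the background, which is the signal a scan for balancing selection looks for.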
Soh, Zu; Nishikawa, Shinya; Kurita, Yuichi; Takiguchi, Noboru; Tsuji, Toshio
2016-01-01
To predict the odor quality of an odorant mixture, the interaction between odorants must be taken into account. Previously, an experiment in which mice discriminated between odorant mixtures identified a selective adaptation mechanism in the olfactory system. This paper proposes an olfactory model for odorant mixtures that can account for selective adaptation in terms of neural activity. The proposed model uses the spatial activity pattern of the mitral layer obtained from model simulations to predict the perceptual similarity between odors. Measured glomerular activity patterns are used as input to the model. The neural interaction between mitral cells and granular cells is then simulated, and a dissimilarity index between odors is defined using the activity patterns of the mitral layer. An odor set composed of three odorants is used to test the ability of the model. Simulations are performed based on the odor discrimination experiment on mice. As a result, we observe that part of the neural activity in the glomerular layer is enhanced in the mitral layer, whereas another part is suppressed. We find that the dissimilarity index strongly correlates with the odor discrimination rate of mice: r = 0.88 (p = 0.019). We conclude that our model has the ability to predict the perceptual similarity of odorant mixtures. In addition, the model also accounts for selective adaptation via the odor discrimination rate, and the enhancement and inhibition in the mitral layer may be related to this selective adaptation.
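The reported agreement between the model's dissimilarity index and the behavioral data (r = 0.88) is an ordinary Pearson correlation, which can be computed directly. The numbers below are illustrative, not the paper's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative values: model dissimilarity index per odor pair vs. the
# fraction of trials on which mice discriminated that pair.
dissimilarity = [0.12, 0.35, 0.48, 0.60, 0.81, 0.95]
discrimination = [0.52, 0.61, 0.70, 0.74, 0.88, 0.97]
r = pearson_r(dissimilarity, discrimination)
```

A strong positive r means odor pairs the model calls dissimilar are also the pairs the mice discriminate most reliably, which is the validation logic of the study.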
The effect of call libraries and acoustic filters on the identification of bat echolocation.
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-09-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys.
The effect of call libraries and acoustic filters on the identification of bat echolocation
Clement, Matthew J; Murray, Kevin L; Solick, Donald I; Gruver, Jeffrey C
2014-01-01
Quantitative methods for species identification are commonly used in acoustic surveys for animals. While various identification models have been studied extensively, there has been little study of methods for selecting calls prior to modeling or methods for validating results after modeling. We obtained two call libraries with a combined 1556 pulse sequences from 11 North American bat species. We used four acoustic filters to automatically select and quantify bat calls from the combined library. For each filter, we trained a species identification model (a quadratic discriminant function analysis) and compared the classification ability of the models. In a separate analysis, we trained a classification model using just one call library. We then compared a conventional model assessment that used the training library against an alternative approach that used the second library. We found that filters differed in the share of known pulse sequences that were selected (68 to 96%), the share of non-bat noises that were excluded (37 to 100%), their measurement of various pulse parameters, and their overall correct classification rate (41% to 85%). Although the top two filters did not differ significantly in overall correct classification rate (85% and 83%), rates differed significantly for some bat species. In our assessment of call libraries, overall correct classification rates were significantly lower (15% to 23% lower) when tested on the second call library instead of the training library. Well-designed filters obviated the need for subjective and time-consuming manual selection of pulses. Accordingly, researchers should carefully design and test filters and include adequate descriptions in publications. Our results also indicate that it may not be possible to extend inferences about model accuracy beyond the training library. 
If so, the accuracy of acoustic-only surveys may be lower than commonly reported, which could affect ecological understanding or management decisions based on acoustic surveys. PMID:25535563
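The library-transfer comparison can be illustrated with a minimal stand-in classifier (a nearest-centroid rule rather than the paper's quadratic discriminant function analysis; all pulse parameters are invented): train on one call library, then score the correct classification rate on both the training library and a second, independent library.

```python
def centroids(library):
    """Mean feature vector per species from (features, species) pairs."""
    sums, counts = {}, {}
    for feats, species in library:
        acc = sums.setdefault(species, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[species] = counts.get(species, 0) + 1
    return {s: [v / counts[s] for v in acc] for s, acc in sums.items()}

def classify(model, feats):
    """Assign the species whose centroid is nearest (squared Euclidean)."""
    return min(model, key=lambda s: sum((a - b) ** 2
                                        for a, b in zip(model[s], feats)))

def correct_rate(model, library):
    hits = sum(classify(model, f) == s for f, s in library)
    return hits / len(library)

# Invented pulse parameters (e.g., characteristic frequency kHz, duration ms).
train_lib = [([40.0, 3.0], "A"), ([42.0, 3.2], "A"),
             ([25.0, 8.0], "B"), ([24.0, 7.5], "B")]
second_lib = [([38.0, 3.4], "A"), ([45.0, 2.6], "A"),
              ([28.0, 6.0], "B"), ([22.0, 9.0], "B"), ([33.0, 5.5], "B")]

model = centroids(train_lib)
rate_train = correct_rate(model, train_lib)
rate_second = correct_rate(model, second_lib)
```

In this toy example the rate on the second library is lower than on the training library, mirroring the paper's 15-23% drop and its caution against extending accuracy claims beyond the training library.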
Prediction and visualization of redox conditions in the groundwater of Central Valley, California
NASA Astrophysics Data System (ADS)
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2017-03-01
Regional-scale, three-dimensional continuous probability models were constructed for aspects of redox conditions in the groundwater system of the Central Valley, California. These models yield grids depicting the probability that groundwater in a particular location will have dissolved oxygen (DO) concentrations less than selected threshold values representing anoxic groundwater conditions, or will have dissolved manganese (Mn) concentrations greater than selected threshold values representing secondary drinking water-quality contaminant levels (SMCL) and health-based screening levels (HBSL). The probability models were constrained by the alluvial boundary of the Central Valley to a depth of approximately 300 m. Probability distribution grids can be extracted from the 3-D models at any desired depth and are of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Models were constructed using a Boosted Regression Trees (BRT) machine learning technique, which produces many trees as part of an additive model, can handle many variables, automatically incorporates interactions, and is resistant to collinearity. Machine learning methods for statistical prediction are becoming increasingly popular, in part because they do not require the assumptions associated with traditional hypothesis testing. Models were constructed using measured dissolved oxygen and manganese concentrations sampled from 2767 wells within the alluvial boundary of the Central Valley, and over 60 explanatory variables representing regional-scale soil properties, soil chemistry, land use, aquifer textures, and aquifer hydrologic properties. Models were trained on a USGS dataset of 932 wells and evaluated on an independent hold-out dataset of 1835 wells from the California Division of Drinking Water.
We used cross-validation to assess the predictive performance of models of varying complexity as a basis for selecting final models. Trained models were applied to cross-validation testing data and a separate hold-out dataset to evaluate model predictive performance, emphasizing three fit metrics: Kappa, accuracy, and the area under the receiver operating characteristic curve (ROC). The final trained models were used for mapping predictions at discrete depths down to 304.8 m. Trained DO and Mn models had accuracies of 86-100%, Kappa values of 0.69-0.99, and ROC values of 0.92-1.0. Model accuracies for cross-validation testing datasets were 82-95% and ROC values were 0.87-0.91, indicating good predictive performance. Kappas for the cross-validation testing dataset were 0.30-0.69, indicating fair to substantial agreement between testing observations and model predictions. Hold-out data were available for the manganese model only and indicated accuracies of 89-97%, ROC values of 0.73-0.75, and Kappa values of 0.06-0.30. The predictive performance of both the DO and Mn models was reasonable, considering all three fit metrics and the low percentages of low-DO and high-Mn events in the data.
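Two of the fit metrics reported above, accuracy and Cohen's kappa, are simple functions of the confusion counts. A self-contained sketch with toy binary labels (not the Central Valley data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the observations."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e),
    where p_e is the agreement expected from the marginal label rates."""
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    labels = set(y_true) | set(y_pred)
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1.0 - p_e)

# Toy binary outcomes, e.g., 1 = dissolved oxygen below a threshold.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]
acc = accuracy(y_true, y_pred)
kap = cohens_kappa(y_true, y_pred)
```

The gap between the two is why the abstract reports both: with rare events (the low percentages of low-DO and high-Mn wells), accuracy can stay high while kappa reveals how much of that agreement is chance.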