Robust model selection and the statistical classification of languages
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples consist of several sentences produced by diverse speakers, and therefore correspond to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language, with the selection made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology to estimate a model which represents the main law for each language. Our findings agree with the linguistic conjecture related to the rhythm of the languages included in our dataset.
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
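The core idea lends itself to a short sketch. The code below (Python, with made-up data and an illustrative median rule) permutes the response, records for each permuted copy the smallest penalty that zeroes out every LASSO coefficient, and applies a summary of those null penalties to the real data; it is a conceptual sketch, not the authors' reference implementation.

```python
# Conceptual sketch (not the authors' implementation): permutation selection
# of the LASSO penalty. For each permuted response, compute the smallest
# sklearn alpha that zeroes out all coefficients (max_j |x_j'y|/n), then use a
# summary of those null penalties -- here the median -- on the real data.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=n)

Xs = StandardScaler().fit_transform(X)
yc = y - y.mean()

def alpha_max(Xs, y):
    # smallest alpha for which the sklearn LASSO solution is identically zero
    return np.max(np.abs(Xs.T @ y)) / len(y)

perm_alphas = [alpha_max(Xs, rng.permutation(yc)) for _ in range(100)]
alpha_perm = np.median(perm_alphas)            # summary of the null penalties

fit = Lasso(alpha=alpha_perm).fit(Xs, yc)
print("selected predictors:", np.flatnonzero(fit.coef_))
```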
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation (SCAD) penalty and the adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. In particular, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial are presented to illustrate the proposed methodology. PMID:20336190
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problems that are emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into the modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of the perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analyses suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data from information and communication technologies (ICTs) in "smart" urban infrastructures enables data modeling to be used to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to the target output. The strategy also enables the selection to be validated by engineering judgement. In this context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without the benefit of input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test
ERIC Educational Resources Information Center
Ho, Tsung-Han; Dodd, Barbara G.
2012-01-01
In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.
Ni, Qianwu; Chen, Lei
2017-01-01
Correct prediction of protein structural class is beneficial to the investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, it remains a great challenge to select a proper classification algorithm and to extract the essential features that participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physicochemical features were adopted to represent proteins, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The predicted classes yielded by these algorithms and the true class of each protein were collected to construct a dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from this list one by one to build an ensemble prediction model, and the ensemble prediction model with the best performance was selected as the optimal one. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. Both the feature selection and the algorithm selection procedures are helpful for building an ensemble prediction model with better performance.
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluation. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, and a procedure is developed to address them sequentially. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
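The cluster-based calibration data selection can be illustrated with a minimal sketch. In the code below, synthetic event descriptors and a plain linear regression stand in for real storm water data and a storm water quality model; k-means clusters the candidate events and the event nearest each centroid is kept for calibration. The cluster count and the data are assumptions for illustration only.

```python
# Minimal sketch of cluster-based calibration data selection: cluster the
# candidate events and keep the event closest to each cluster centre, so the
# calibration set spans the observed conditions. Synthetic event descriptors
# and a plain linear regression stand in for real storm water data and models.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(size=(120, 3))                     # candidate event descriptors
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.normal(size=120)

def rmse(model, idx):
    return float(np.sqrt(np.mean((model.predict(X[idx]) - y[idx]) ** 2)))

n_cal = 15
km = KMeans(n_clusters=n_cal, n_init=10, random_state=1).fit(X)
cal_idx = np.array([np.argmin(np.linalg.norm(X - c, axis=1))
                    for c in km.cluster_centers_])        # representative events
ver_idx = np.setdiff1d(np.arange(len(X)), cal_idx)
cluster_model = LinearRegression().fit(X[cal_idx], y[cal_idx])

rand_idx = rng.choice(len(X), size=n_cal, replace=False)  # random benchmark
rand_ver = np.setdiff1d(np.arange(len(X)), rand_idx)
rand_model = LinearRegression().fit(X[rand_idx], y[rand_idx])

print(f"cluster-selected calibration RMSE: {rmse(cluster_model, ver_idx):.3f}")
print(f"randomly selected calibration RMSE: {rmse(rand_model, rand_ver):.3f}")
```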
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and of other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data on hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results than conventional variable selection models. Overall, the two newly proposed procedures were stable across various simulation scenarios, demonstrating higher power and a lower false positive rate in variable selection than the compared methods. In the empirical analysis, the proposed procedures yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and selected a more stringent set of factors. According to the proposed procedures, the individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents. Conclusions The newly proposed procedures improve the identification of significant variables and provide new insights into epidemiological association analysis. PMID:26214802
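A rough sketch of the bootstrap-ranking idea follows: refit a cross-validated LASSO on bootstrap resamples and rank predictors by how often they receive a nonzero coefficient. The sketch uses a continuous response and an illustrative 0.8 frequency threshold; the paper's setting is a binary infection outcome, for which an L1-penalized logistic regression would be substituted.

```python
# Rough sketch of bootstrap ranking with a LASSO-type penalty: refit a
# cross-validated LASSO on bootstrap resamples and rank predictors by how
# often they receive a nonzero coefficient. Data, resample count, and the
# 0.8 stability threshold are illustrative.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 150, 30
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + 1.0 * X[:, 3] + rng.normal(size=n)

B = 50
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)            # bootstrap resample
    fit = LassoCV(cv=5).fit(X[idx], y[idx])
    counts += (fit.coef_ != 0)

freq = counts / B                               # selection frequencies
ranking = np.argsort(freq)[::-1]
print("top-ranked predictors:", ranking[:5], "frequencies:", freq[ranking[:5]])
print("stable predictors (freq >= 0.8):", np.flatnonzero(freq >= 0.8))
```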
Penalized regression procedures for variable selection in the potential outcomes framework
Ghosh, Debashis; Zhu, Yeying; Coffman, Donna L.
2015-01-01
A recent topic of much interest in causal inference is model selection. In this article, we describe a framework in which to consider penalized regression approaches to variable selection for causal effects. The framework leads to a simple ‘impute, then select’ class of procedures that is agnostic to the type of imputation algorithm as well as penalized regression used. It also clarifies how model selection involves a multivariate regression model for causal inference problems, and that these methods can be applied for identifying subgroups in which treatment effects are homogeneous. Analogies and links with the literature on machine learning methods, missing data and imputation are drawn. A difference LASSO algorithm is defined, along with its multiple imputation analogues. The procedures are illustrated using a well-known right heart catheterization dataset. PMID:25628185
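A minimal sketch of an 'impute, then select' workflow is given below, assuming a crude mean-plus-noise imputation as a stand-in for a proper multiple-imputation algorithm, a cross-validated LASSO as the penalized regression, and a simple majority vote across imputations as the combination rule; none of these specific choices are prescribed by the article.

```python
# Minimal 'impute, then select' sketch: build several imputed copies of a
# covariate matrix with missing entries, run a cross-validated LASSO on each,
# and keep covariates selected in a majority of imputations. The crude
# mean-plus-noise imputation and the majority rule are illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p, M = 200, 20, 5
X_full = rng.normal(size=(n, p))
y = 2.0 * X_full[:, 0] - 1.5 * X_full[:, 2] + rng.normal(size=n)

X = X_full.copy()
X[rng.random(size=X.shape) < 0.10] = np.nan     # 10% missing completely at random

col_mean = np.nanmean(X, axis=0)
col_std = np.nanstd(X, axis=0)

selections = np.zeros(p)
for _ in range(M):
    Xi = X.copy()
    miss = np.isnan(Xi)
    # draw imputations around the column means (very crude stand-in for MI)
    Xi[miss] = (col_mean + rng.normal(size=X.shape) * col_std)[miss]
    selections += (LassoCV(cv=5).fit(Xi, y).coef_ != 0)

print("covariates kept by majority vote:", np.flatnonzero(selections / M >= 0.5))
```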
Variable selection in subdistribution hazard frailty models with competing risks data
Do Ha, Il; Lee, Minjung; Oh, Seungyoung; Jeong, Jong-Hyeon; Sylvester, Richard; Lee, Youngjo
2014-01-01
The proportional subdistribution hazards model (i.e. the Fine-Gray model) has been widely used for analyzing univariate competing risks data. Recently, this model has been extended to clustered competing risks data via frailty. To the best of our knowledge, however, there has been no literature on variable selection methods for such competing risks frailty models. In this paper, we propose a simple but unified procedure via a penalized h-likelihood (HL) for variable selection of fixed effects in a general class of subdistribution hazard frailty models, in which random effects may be shared or correlated. We consider three penalty functions (LASSO, SCAD and HL) in our variable selection procedure. We show that the proposed method can be easily implemented using a slight modification to existing h-likelihood estimation approaches. Numerical studies demonstrate that the proposed procedure using the HL penalty performs well, providing a higher probability of choosing the true model than the LASSO and SCAD methods without losing prediction accuracy. The usefulness of the new method is illustrated using two actual data sets from multi-center clinical trials. PMID:25042872
Ni, Ai; Cai, Jianwen
2018-07-01
Case-cohort designs are commonly used in large epidemiological studies to reduce the cost associated with covariate measurement. In many such studies the number of covariates is very large, so an efficient variable selection method is needed for case-cohort studies in which the covariates are only observed in a subset of the sample. The current literature on this topic has focused on the proportional hazards model. However, in many studies the additive hazards model is preferred over the proportional hazards model, either because the proportional hazards assumption is violated or because the additive hazards model provides more relevant information for the research question. Motivated by one such study, the Atherosclerosis Risk in Communities (ARIC) study, we investigate the properties of a regularized variable selection procedure in a stratified case-cohort design under an additive hazards model with a diverging number of parameters. We establish the consistency and asymptotic normality of the penalized estimator and prove its oracle property. Simulation studies are conducted to assess the finite sample performance of the proposed method with a modified cross-validation tuning parameter selection method. We apply the variable selection procedure to the ARIC study to demonstrate its practical use.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2 to the 4th power experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
ERIC Educational Resources Information Center
Oberauer, Klaus; Souza, Alessandra S.; Druey, Michel D.; Gade, Miriam
2013-01-01
The article investigates the mechanisms of selecting and updating representations in declarative and procedural working memory (WM). Declarative WM holds the objects of thought available, whereas procedural WM holds representations of what to do with these objects. Both systems consist of three embedded components: activated long-term memory, a…
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will generate lower quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
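The decision steps described above can be sketched as follows, with simulated calibrator data and formulas reconstructed from the abstract rather than taken from the authors' published R script; the 1/x versus 1/x² comparison is omitted for brevity.

```python
# Hedged reconstruction of the decision steps from the abstract (not the
# authors' published R script): (1) F-test on replicate variances at the
# lowest and highest calibrators to decide on weighting, (2) partial F-test
# for quadratic vs. linear order, (3) normality test of standardized weighted
# residuals to validate the chosen model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
conc = np.repeat([1, 5, 10, 50, 100, 500], 3).astype(float)   # 3 replicates/level
resp = 20.0 * conc * (1 + 0.05 * rng.normal(size=conc.size))  # ~5% relative noise

# (1) is weighting needed? compare replicate variances at ULOQ and LLOQ
v_low = resp[conc == conc.min()].var(ddof=1)
v_high = resp[conc == conc.max()].var(ddof=1)
p_weight = 1 - stats.f.cdf(v_high / v_low, dfn=2, dfd=2)
weights = 1.0 / conc if p_weight < 0.05 else np.ones_like(conc)

# (2) model order: weighted quadratic vs. weighted linear via partial F-test
def wls(design, y, w):
    beta = np.linalg.solve(design.T @ (w[:, None] * design), design.T @ (w * y))
    r = y - design @ beta
    return float(r @ (w * r)), beta                           # weighted RSS, coefs

X_lin = np.column_stack([np.ones_like(conc), conc])
X_quad = np.column_stack([np.ones_like(conc), conc, conc ** 2])
rss_lin, _ = wls(X_lin, resp, weights)
rss_quad, _ = wls(X_quad, resp, weights)
df2 = conc.size - X_quad.shape[1]
quadratic = (1 - stats.f.cdf((rss_lin - rss_quad) / (rss_quad / df2), 1, df2)) < 0.05

# (3) validate: normality of standardized weighted residuals
design = X_quad if quadratic else X_lin
_, beta = wls(design, resp, weights)
res = np.sqrt(weights) * (resp - design @ beta)
z = (res - res.mean()) / res.std(ddof=1)
print("weighting p:", round(p_weight, 4), "| quadratic:", bool(quadratic),
      "| KS normality p:", round(stats.kstest(z, "norm").pvalue, 3))
```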
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.
Royston, Patrick; Sauerbrei, Willi
2016-01-01
In a recent article, Royston (2015, Stata Journal 15: 275-291) introduced the approximate cumulative distribution (ACD) transformation of a continuous covariate x as a route toward modeling a sigmoid relationship between x and an outcome variable. In this article, we extend the approach to multivariable modeling by modifying the standard Stata program mfp. The result is a new program, mfpa, that has all the features of mfp plus the ability to fit a new model for user-selected covariates that we call FP1(p1, p2). The FP1(p1, p2) model comprises the best-fitting combination of a dimension-one fractional polynomial (FP1) function of x and an FP1 function of ACD(x). We describe a new model-selection algorithm, the function-selection procedure with ACD transformation, which uses significance testing to attempt to simplify an FP1(p1, p2) model to a submodel: an FP1 or linear model in x or in ACD(x). The function-selection procedure with ACD transformation is related in concept to the FSP (FP function-selection procedure), which is an integral part of mfp and is used to simplify a dimension-two (FP2) function. We describe the mfpa command and give univariable and multivariable examples with real data to demonstrate its use.
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling that incorporates a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. Because the selected windows are not unique when the algorithm is executed repeatedly, multiple models are constructed and then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
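The window-selection-plus-stacking concept can be illustrated with a short sketch: fit a PLS model on each candidate wavelength window, keep the windows with the lowest cross-validated error, and average their predictions. The fake spectra, window width, and the three-window choice are assumptions for illustration; this is not the SWS algorithm itself.

```python
# Conceptual sketch of window selection plus stacking (not the SWS algorithm
# itself): fit a PLS model on each candidate wavelength window, keep the
# windows with the lowest cross-validated error, and average their
# predictions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n, n_wl = 80, 200
spectra = rng.normal(size=(n, n_wl)).cumsum(axis=1)   # smooth-ish fake spectra
conc = spectra[:, 40:60].mean(axis=1) + 0.1 * rng.normal(size=n)

def window_pred(lo, hi):
    return cross_val_predict(PLSRegression(n_components=3),
                             spectra[:, lo:hi], conc, cv=5).ravel()

width, step = 20, 10
windows = [(s, s + width) for s in range(0, n_wl - width + 1, step)]
rmse = [np.sqrt(np.mean((window_pred(lo, hi) - conc) ** 2)) for lo, hi in windows]

best = np.argsort(rmse)[:3]                           # retain the 3 best windows
stacked = np.mean([window_pred(*windows[i]) for i in best], axis=0)
print("selected windows:", [windows[i] for i in best])
print("stacked RMSE:", round(float(np.sqrt(np.mean((stacked - conc) ** 2))), 3))
```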
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
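The workflow being compared can be sketched briefly: a full-variable random forest versus backward elimination driven by impurity importance, with accuracy judged on cross-validation folds kept external to the elimination loop. The simulated data, the 0.8 retention fraction, and the forest sizes below are illustrative assumptions, not the study's settings.

```python
# Illustrative sketch of the comparison: a random forest using all predictors
# versus backward elimination guided by impurity importance, with accuracy
# judged on cross-validation folds kept external to the elimination loop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=600, n_features=60, n_informative=8,
                           random_state=0)

def backward_eliminate(X, y, keep_frac=0.8, min_vars=5):
    keep = np.arange(X.shape[1])
    while len(keep) > min_vars:
        rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, keep], y)
        order = np.argsort(rf.feature_importances_)[::-1]
        keep = keep[order[: max(min_vars, int(keep_frac * len(keep)))]]
    return keep

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc_full, acc_red = [], []
for train, test in cv.split(X, y):
    full = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train], y[train])
    acc_full.append(full.score(X[test], y[test]))

    sel = backward_eliminate(X[train], y[train])       # selection inside the fold
    red = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[train][:, sel], y[train])
    acc_red.append(red.score(X[test][:, sel], y[test]))

print("full-variable CV accuracy :", round(float(np.mean(acc_full)), 3))
print("reduced-model CV accuracy :", round(float(np.mean(acc_red)), 3))
```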
Surface Estimation, Variable Selection, and the Nonparametric Oracle Property.
Storlie, Curtis B; Bondell, Howard D; Reich, Brian J; Zhang, Hao Helen
2011-04-01
Variable selection for multivariate nonparametric regression is an important, yet challenging, problem due, in part, to the infinite dimensionality of the function space. An ideal selection procedure should be automatic, stable, easy to use, and have desirable asymptotic properties. In particular, we define a selection procedure to be nonparametric oracle (np-oracle) if it consistently selects the correct subset of predictors and at the same time estimates the smooth surface at the optimal nonparametric rate, as the sample size goes to infinity. In this paper, we propose a model selection procedure for nonparametric models, and explore the conditions under which the new method enjoys the aforementioned properties. Developed in the framework of smoothing spline ANOVA, our estimator is obtained via solving a regularization problem with a novel adaptive penalty on the sum of functional component norms. Theoretical properties of the new estimator are established. Additionally, numerous simulated and real examples further demonstrate that the new approach substantially outperforms other existing methods in the finite sample setting.
Proposing a Comprehensive Model for Identifying Teaching Candidates
ERIC Educational Resources Information Center
Bowles, Terry; Hattie, John; Dinham, Stephen; Scull, Janet; Clinton, Janet
2014-01-01
Teacher education in universities continues to diversify in the twenty-first century. Just as course offerings, course delivery, staffing and the teaching/research mix vary extensively from university to university, so does the procedure for pre-service teacher selection. Various factors bear on selection procedures and practices; however, few…
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
Augmented Self-Modeling as a Treatment for Children with Selective Mutism.
ERIC Educational Resources Information Center
Kehle, Thomas J.; Madaus, Melissa R.; Baratta, Victoria S.; Bray, Melissa A.
1998-01-01
Describes the treatment of three children experiencing selective mutism. The procedure utilized incorporated self-modeling, mystery motivators, self-reinforcement, stimulus fading, spacing, and antidepressant medication. All three children evidenced a complete cessation of selective mutism and maintained their treatment gains at follow-up.…
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
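A minimal sketch of the stacking idea follows: vertically stack M imputed copies of the data and fit a single penalized regression to the stacked rows, so each covariate receives one selection decision across imputations. A plain cross-validated LASSO stands in for the group-lasso penalty used in the paper, and a crude mean-plus-noise imputation stands in for a proper multiple-imputation procedure.

```python
# Sketch of the stacking idea: vertically stack M imputed copies of the data
# and fit a single penalized regression to the stacked rows. A plain
# cross-validated LASSO stands in for the group-lasso penalty, and the crude
# mean-plus-noise imputation stands in for a proper multiple-imputation step.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p, M = 150, 15, 5
X_full = rng.normal(size=(n, p))
y = 1.2 * X_full[:, 0] - 0.9 * X_full[:, 4] + rng.normal(size=n)

X = X_full.copy()
X[rng.random(X.shape) < 0.15] = np.nan           # 15% missing covariate values

mu, sd = np.nanmean(X, axis=0), np.nanstd(X, axis=0)
stacked_X, stacked_y = [], []
for _ in range(M):
    Xi = X.copy()
    miss = np.isnan(Xi)
    Xi[miss] = (mu + rng.normal(size=X.shape) * sd)[miss]
    stacked_X.append(Xi)
    stacked_y.append(y)

Xs = np.vstack(stacked_X)                        # (M*n) x p stacked design
ys = np.concatenate(stacked_y)
fit = LassoCV(cv=5).fit(Xs, ys)                  # each row would normally get weight 1/M
print("covariates selected from the stacked data:", np.flatnonzero(fit.coef_))
```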
NASA Astrophysics Data System (ADS)
Frollo, Ivan; Krafčík, Andrej; Andris, Peter; Přibil, Jiří; Dermek, Tomáš
2015-12-01
Circular samples are frequent objects of in-vitro investigation using imaging methods based on magnetic resonance principles. The goal of our investigation is the imaging of thin planar layers without using the slice selection procedure, i.e. purely 2D imaging, as well as the imaging of selected layers of samples in circular vessels and Eppendorf tubes, which necessarily uses slice selection. Although standard imaging methods were used, some specific issues arise when mathematical modeling of these procedures is introduced. In this paper, several mathematical models are presented and compared with real experimental results. Circular magnetic samples were placed into the homogeneous magnetic field of a low-field imager based on nuclear magnetic resonance. For experimental verification, a 0.178 Tesla ESAOTE Opera MRI imager was used.
ERIC Educational Resources Information Center
Reckase, Mark D.
Latent trait model calibration procedures were used on data obtained from a group testing program. The one-parameter model of Wright and Panchapakesan and the three-parameter logistic model of Wingersky, Wood, and Lord were selected for comparison. These models and their corresponding estimation procedures were compared, using actual and simulated…
Linear and nonlinear variable selection in competing risks data.
Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng
2018-06-15
The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects with competing risks data have been studied in the past decade, but there is no existing work on the selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select linear and nonlinear covariate effects simultaneously and to estimate the selected effects. We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and the adaptive LASSO to select each of the two components. Extensive numerical studies are conducted to demonstrate that the proposed procedure can achieve good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing causes of death.
FORECAST MODEL FOR MODERATE EARTHQUAKES NEAR PARKFIELD, CALIFORNIA.
Stuart, William D.; Archuleta, Ralph J.; Lindh, Allan G.
1985-01-01
The paper outlines a procedure for using an earthquake instability model and repeated geodetic measurements to attempt an earthquake forecast. The procedure differs from other prediction methods, such as recognizing trends in data or assuming failure at a critical stress level, by using a self-contained instability model that simulates both preseismic and coseismic faulting in a natural way. In short, physical theory supplies a family of curves, and the field data select the member curves whose continuation into the future constitutes a prediction. Model inaccuracy and resolving power of the data determine the uncertainty of the selected curves and hence the uncertainty of the earthquake time.
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties with existing experimental procedures for two reasons. First, existing experimental procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment; however, for the experimental scenarios of concern, a sound model is often unavailable before the experiment. Second, those scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle, and existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without assuming a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials; it performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges above, function estimation and variable selection are performed by data-driven modeling methods that build a predictive model from data collected during the course of an experiment, removing the requirement of a parametric model at the outset; design optimization selects experimental designs on the fly based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by concepts from active learning. The present study demonstrates that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as a proxy for the latent data structure, whereas existing procedures can draw poor statistical inferences if a misspecified model is assumed. RAAS also achieves inferential efficiency by requiring fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Patterson, Olga V; Forbush, Tyler B; Saini, Sameer D; Moser, Stephanie E; DuVall, Scott L
2015-01-01
In order to measure the level of utilization of colonoscopy procedures, the primary indication for each procedure must be identified. Colonoscopies may be utilized not only for screening, but also for diagnostic or therapeutic purposes. To determine whether a colonoscopy was performed for screening, we created a natural language processing system to identify colonoscopy reports in the electronic medical record system and extract indications for the procedure. A rule-based model and three machine-learning models were created using 2,000 manually annotated clinical notes of patients cared for in the Department of Veterans Affairs. Performance of the models was measured and compared. Analysis of the models on a test set of 1,000 documents indicates that the rule-based system's performance stays fairly constant across the training and testing sets, whereas the machine learning model without feature selection showed a significant decrease in performance. Therefore, the rule-based classification system appears to be more robust than a machine-learning system when no feature selection is performed.
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
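The multicollinearity filter blamed above for the automatic procedure's poor performance can be sketched with a variance inflation factor (VIF) rule; the specific filter and threshold used in the study are not stated, so the VIF rule, the threshold of 5, and the toy basin characteristics below are assumptions.

```python
# Sketch of a multicollinearity filter of the kind discussed above, here based
# on variance inflation factors. Predictors are dropped one at a time until
# every remaining VIF falls below the threshold.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(7)
n = 300
elev = rng.normal(size=n)
slope = 0.9 * elev + 0.3 * rng.normal(size=n)        # strongly tied to elevation
forest = rng.normal(size=n)
X = np.column_stack([elev, slope, forest])
names = ["elevation", "slope", "forest_cover"]

threshold = 5.0
cols = list(range(X.shape[1]))
while len(cols) > 1:
    vifs = [variance_inflation_factor(X[:, cols], i) for i in range(len(cols))]
    worst = int(np.argmax(vifs))
    if vifs[worst] < threshold:
        break
    print(f"dropping {names[cols[worst]]} (VIF = {vifs[worst]:.1f})")
    cols.pop(worst)

print("retained predictors:", [names[c] for c in cols])
```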
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7%. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% reduction in cost, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.
Van Iddekinge, Chad H; Ferris, Gerald R; Perrewé, Pamela L; Blass, Fred R; Heetderks, Thomas D; Perryman, Alexa A
2009-07-01
Surprisingly few data exist concerning whether and how utilization of job-related selection and training procedures affects different aspects of unit or organizational performance over time. The authors used longitudinal data from a large fast-food organization (N = 861 units) to examine how change in use of selection and training relates to change in unit performance. Latent growth modeling analyses revealed significant variation in both the use and the change in use of selection and training across units. Change in selection and training was related to change in 2 proximal unit outcomes: customer service performance and retention. Change in service performance, in turn, was related to change in the more distal outcome of unit financial performance (i.e., profits). Selection and training also affected financial performance, both directly and indirectly (e.g., through service performance). Finally, results of a cross-lagged panel analysis suggested the existence of a reciprocal causal relationship between the utilization of the human resources practices and unit performance. However, there was some evidence to suggest that selection and training may be associated with different causal sequences, such that use of the training procedure appeared to lead to unit performance, whereas unit performance appeared to lead to use of the selection procedure.
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization, and it is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made in haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire, and a total of 45 questionnaires filled in by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of the Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
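The AHP step at the core of such decision-support models can be sketched as follows: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and a consistency ratio is checked before scoring the alternatives. The criteria, judgments, and alternative scores below are invented for illustration and are not taken from the study's questionnaire.

```python
# Sketch of the AHP step underlying such models: derive criterion weights from
# the principal eigenvector of a pairwise comparison matrix, check the
# consistency ratio, then score the two flux alternatives.
import numpy as np

criteria = ["clearance", "cost", "complication_risk"]
A = np.array([[1.0, 3.0, 2.0],        # Saaty-scale pairwise comparisons
              [1/3, 1.0, 1/2],        # (reciprocal matrix)
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # criterion weights

ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58                         # random index 0.58 for a 3x3 matrix
print("criterion weights:", dict(zip(criteria, w.round(3))), "| CR:", round(cr, 3))

scores = {"high_flux": np.array([0.8, 0.4, 0.7]),   # performance on each criterion
          "low_flux":  np.array([0.5, 0.8, 0.6])}
for name, s in scores.items():
    print(name, "overall priority:", round(float(s @ w), 3))
```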
CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS
Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.
2012-01-01
In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established, and in particular, its variable selection performance under a diverging number of predictors and sample size is investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
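A simplified sketch in the spirit of COP is shown below: at each step, add the predictor whose inclusion most increases the correlation between the response and the fitted linear combination of selected predictors. COP itself works with a transformed response under sufficient dimension reduction and uses formal stopping criteria; that transformation is omitted here and the stopping threshold is an arbitrary illustrative choice.

```python
# Simplified forward stepwise search in the spirit of COP: add, at each step,
# the predictor whose inclusion most increases the correlation between the
# response and the fitted linear combination of selected predictors.
import numpy as np

rng = np.random.default_rng(8)
n, p = 300, 20
X = rng.normal(size=(n, p))
y = np.exp(0.5 * X[:, 0] - 0.5 * X[:, 5]) + 0.2 * rng.normal(size=n)

def fitted_corr(cols):
    Z = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return abs(np.corrcoef(Z @ beta, y)[0, 1])

selected, remaining, current = [], list(range(p)), 0.0
while remaining:
    best_corr, best_j = max((fitted_corr(selected + [j]), j) for j in remaining)
    if best_corr - current < 0.01:           # stop when the gain is negligible
        break
    selected.append(best_j)
    remaining.remove(best_j)
    current = best_corr

print("selected predictors:", selected, "| fitted correlation:", round(current, 3))
```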
The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.
2013-01-01
We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
Efficient robust doubly adaptive regularized regression with applications.
Karunamuni, Rohana J; Kong, Linglong; Tu, Wei
2018-01-01
We consider the problem of estimation and variable selection for general linear regression models. Regularized regression procedures have been widely used for variable selection, but most existing methods perform poorly in the presence of outliers. We construct a new penalized procedure that simultaneously attains full efficiency and maximum robustness. Furthermore, the proposed procedure satisfies the oracle properties. The new procedure is designed to achieve sparse and robust solutions by imposing adaptive weights on both the decision loss and the penalty function. The proposed method of estimation and variable selection attains full efficiency when the model is correct and, at the same time, achieves maximum robustness when outliers are present. We examine the robustness properties using the finite-sample breakdown point and an influence function. We show that the proposed estimator attains the maximum breakdown point. Furthermore, there is no loss in efficiency when there are no outliers or the error distribution is normal. For practical implementation of the proposed method, we present a computational algorithm. We examine the finite-sample and robustness properties using Monte Carlo studies. Two datasets are also analyzed.
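As a loose illustration of combining a robust loss with adaptive penalty weights (not the authors' estimator), the sketch below uses a Huber regression for the initial fit and adaptive-LASSO penalty weights derived from it, implemented via the usual column-rescaling trick; the final LASSO step still uses squared loss, so the robustness here is only partial.

```python
# Loose illustration (not the authors' estimator) of pairing a robust loss
# with adaptive penalty weights: a Huber regression gives initial coefficients
# and adaptive-LASSO weights 1/|initial coef| are applied through column
# rescaling before a standard cross-validated LASSO fit.
import numpy as np
from sklearn.linear_model import HuberRegressor, LassoCV

rng = np.random.default_rng(9)
n, p = 200, 15
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(size=n)
y[:10] += 15.0                                   # gross outliers in the response

init = HuberRegressor(max_iter=500).fit(X, y)    # robust initial estimate
w = np.abs(init.coef_) + 1e-6                    # adaptive weights (avoid /0)

fit = LassoCV(cv=5).fit(X * w, y)                # LASSO on X*w == adaptive LASSO on X
coef = fit.coef_ * w                             # map back to the original scale
print("nonzero coefficients:",
      {int(j): round(float(coef[j]), 2) for j in np.flatnonzero(coef)})
```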
Testing Different Model Building Procedures Using Multiple Regression.
ERIC Educational Resources Information Center
Thayer, Jerome D.
The stepwise regression method of selecting predictors for computer assisted multiple regression analysis was compared with forward, backward, and best subsets regression, using 16 data sets. The results indicated the stepwise method was preferred because of its practical nature, when the models chosen by different selection methods were similar…
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures for the direct simulation Monte Carlo (DSMC) method. Collision partner selection based on random selection from nearest neighbor particles and deterministic selection of nearest neighbor particles have already been introduced as schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and the direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made on appropriate test cases, including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model to predict the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For new and existing collision-pair selection schemes, the effects of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions are investigated via the prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo
2007-01-01
Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally, a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in 1–14 year old children. After starting with a list of 50 potential variables we ended with three highly plausible predictors, by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis, which has application for other environmentally determined infectious diseases. All this was accomplished using general-purpose statistical software. PMID:17892584
Nishii, Takashi; Genkawa, Takuma; Watari, Masahiro; Ozaki, Yukihiro
2012-01-01
A new selection procedure of an informative near-infrared (NIR) region for regression model building is proposed that uses an online NIR/mid-infrared (mid-IR) dual-region spectrometer in conjunction with two-dimensional (2D) NIR/mid-IR heterospectral correlation spectroscopy. In this procedure, both NIR and mid-IR spectra of a liquid sample are acquired sequentially during a reaction process using the NIR/mid-IR dual-region spectrometer; the 2D NIR/mid-IR heterospectral correlation spectrum is subsequently calculated from the obtained spectral data set. From the calculated 2D spectrum, a NIR region is selected that includes bands of high positive correlation intensity with mid-IR bands assigned to the analyte, and used for the construction of a regression model. To evaluate the performance of this procedure, a partial least-squares (PLS) regression model of the ethanol concentration in a fermentation process was constructed. During fermentation, NIR/mid-IR spectra in the 10000-1200 cm⁻¹ region were acquired every 3 min, and a 2D NIR/mid-IR heterospectral correlation spectrum was calculated to investigate the correlation intensity between the NIR and mid-IR bands. NIR regions that include bands at 4343, 4416, 5778, 5904, and 5955 cm⁻¹, which result from the combinations and overtones of the C-H group of ethanol, were selected for use in the PLS regression models, by taking the correlation intensity of a mid-IR band at 2985 cm⁻¹ arising from the CH₃ asymmetric stretching vibration mode of ethanol as a reference. The predicted results indicate that the ethanol concentrations calculated from the PLS regression models fit well to those obtained by high-performance liquid chromatography. Thus, it can be concluded that the selection procedure using the NIR/mid-IR dual-region spectrometer combined with 2D NIR/mid-IR heterospectral correlation spectroscopy is a powerful method for the construction of a reliable regression model.
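A minimal sketch of the core quantity behind this selection step, the synchronous 2D correlation between two spectral blocks sharing the same time axis, is given below; the arrays, sizes, and reference-band index are synthetic placeholders rather than the fermentation data.

```python
# Sketch of a synchronous 2D NIR/mid-IR correlation map from two dynamic
# spectral blocks sharing the same time axis; data here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_time, n_nir, n_mir = 60, 300, 400
nir = rng.normal(size=(n_time, n_nir))     # rows: time points, cols: NIR wavenumbers
mir = rng.normal(size=(n_time, n_mir))     # rows: time points, cols: mid-IR wavenumbers

nir_dyn = nir - nir.mean(axis=0)           # dynamic (mean-centred) spectra
mir_dyn = mir - mir.mean(axis=0)
sync = nir_dyn.T @ mir_dyn / (n_time - 1)  # synchronous correlation intensity

# Pick NIR variables most positively correlated with a chosen mid-IR reference band
ref_band = 123                             # hypothetical column index of the reference band
top_nir = np.argsort(sync[:, ref_band])[-10:]
print(top_nir)
```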
Wrong Answers on Multiple-Choice Achievement Tests: Blind Guesses or Systematic Choices?.
ERIC Educational Resources Information Center
Powell, J. C.
A multi-faceted model for the selection of answers for multiple-choice tests was developed from the findings of a series of exploratory studies. This model implies that answer selection should be curvilinear. A series of models was tested for fit using the chi-square procedure. Data were collected from 359 elementary school students ages 9-12…
ERIC Educational Resources Information Center
National Council on Crime and Delinquency, Hackensack, NJ. NewGate Resource Center.
A guide is provided for establishing a college-level education program for inmates of correctional institutions based on the NewGate concept. Necessary first steps are evaluation of current facilities, selection of the sponsoring agency, and selection of the student body. Guidelines for student selection deal with application procedure, record…
IJS procedure for RELAP5 to TRACE input model conversion using SNAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prosek, A.; Berar, O. A.
2012-07-01
The TRAC/RELAP Advanced Computational Engine (TRACE) advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Much effort has been devoted in the past to developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Josef Stefan' (IJS) procedure for converting a RELAP5 input model of the BETHSY facility to TRACE. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. For calculations of the selected BETHSY 6.2TC test, RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used. The selected BETHSY 6.2TC test was a 15.24 cm equivalent-diameter horizontal cold leg break in the reference pressurized water reactor without high-pressure and low-pressure safety injection. The application of the IJS procedure to the conversion of the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall calculated results obtained with TRACE using the converted RELAP5 model were close to experimental data and comparable to the RELAP5/MOD3.3 calculations. Therefore, it can be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model. (authors)
ERIC Educational Resources Information Center
Downing, David L.
2009-01-01
This study describes and implements a necessary preliminary strategic planning procedure, the Internal Environmental Scanning (IES), and discusses its relevance to strategic planning and university-sponsored lifelong learning program model selection. Employing a qualitative research methodology, a proposed lifelong learning-centric IES process…
Policy Building--An Extension to User Modeling
ERIC Educational Resources Information Center
Yudelson, Michael V.; Brunskill, Emma
2012-01-01
In this paper we combine a logistic regression student model with an exercise selection procedure. As opposed to the body of prior work on strategies for selecting practice opportunities, we are working on an assumption of a finite amount of opportunities to teach the student. Our goal is to prescribe activities that would maximize the amount…
10 CFR 431.135 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
§ 431.135 Units to be tested. For each basic model of automatic commercial ice maker selected for testing, a sample of sufficient size shall be selected...
Variable selection in discrete survival models including heterogeneity.
Groll, Andreas; Tutz, Gerhard
2017-04-01
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured in a discrete way and therefore many ties occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge- and lasso-type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to the birth of the first child.
Continuous Shape Estimation of Continuum Robots Using X-ray Images
Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron
2015-01-01
We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960
A Common Capacity Limitation for Response and Item Selection in Working Memory
ERIC Educational Resources Information Center
Janczyk, Markus
2017-01-01
Successful completion of any cognitive task requires selecting a particular action and the object the action is applied to. Oberauer (2009) suggested a working memory (WM) model comprising a declarative and a procedural part with analogous structures. One important assumption of this model is that both parts work independently of each other, and…
A smart Monte Carlo procedure for production costing and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, C.; Stremel, J.
1996-11-01
Electric utilities using chronological production costing models to decide whether to buy or sell power over the next week or next few weeks need to determine potential profits or losses under a number of uncertainties. A large amount of money can be at stake, often $100,000 a day or more, and one party to the sale must always take on the risk. In the case of fixed-price ($/MWh) contracts, the seller accepts the risk. In the case of cost-plus contracts, the buyer must accept the risk. So, modeling uncertainty and understanding the risk accurately can improve the competitive edge of the user. This paper investigates an efficient procedure for representing risks and costs from capacity outages. Typically, production costing models use an algorithm based on some form of random number generator to select resources as available or on outage. These algorithms allow experiments to be repeated and gains and losses to be observed in a short time. The authors perform several experiments to examine the capability of three unit outage selection methods and measure their results. Specifically, a brute-force Monte Carlo procedure, a Monte Carlo procedure with Latin Hypercube sampling, and a Smart Monte Carlo procedure with cost stratification and directed sampling are examined.
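The sampling step that distinguishes the first two of these methods can be sketched as follows; the unit capacities and forced-outage rates are hypothetical, and only the outage-state sampling (not the production-costing model) is shown.

```python
# Sketch comparing plain Monte Carlo and Latin Hypercube sampling of unit
# outage states; forced-outage rates and capacities are hypothetical.
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(3)
capacity = np.array([400., 350., 300., 250., 200., 150., 100., 100.])   # MW
forced_outage_rate = np.array([0.08, 0.06, 0.10, 0.05, 0.07, 0.04, 0.09, 0.12])
n_scenarios = 2000

# Brute-force Monte Carlo: independent uniforms per unit and scenario
u_mc = rng.random((n_scenarios, capacity.size))
avail_mc = (u_mc >= forced_outage_rate).astype(float) @ capacity

# Latin Hypercube: stratified uniforms, one dimension per unit
u_lhs = qmc.LatinHypercube(d=capacity.size, seed=3).random(n_scenarios)
avail_lhs = (u_lhs >= forced_outage_rate).astype(float) @ capacity

print("mean available MW:", avail_mc.mean().round(1), avail_lhs.mean().round(1))
print("spread of estimate:", avail_mc.std().round(1), avail_lhs.std().round(1))
```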
On spatial mutation-selection models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de; Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu; Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
2013-11-15
We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including the limiting behavior, is studied for two types of mutation-selection models.
NASA Technical Reports Server (NTRS)
Amling, G. E.; Holms, A. G.
1973-01-01
A computer program is described that performs a statistical multiple-decision procedure called chain pooling. The number of mean squares assigned to the error variance is conditioned on the relative magnitudes of the mean squares. Model selection is done according to user-specified levels of type 1 or type 2 error probabilities.
NASA Technical Reports Server (NTRS)
1976-01-01
Full size Tug LO2 and LH2 tank configurations were defined, based on selected tank geometries. These configurations were then locally modeled for computer stress analysis. A large subscale test tank, representing the selected Tug LO2 tank, was designed and analyzed. This tank was fabricated using procedures which represented production operations. An evaluation test program was outlined and a test procedure defined. The necessary test hardware was also fabricated.
Forecasting volatility with neural regression: a contribution to model adequacy.
Refenes, A N; Holt, W T
2001-01-01
Neural nets' usefulness for forecasting is limited by problems of overfitting and the lack of rigorous procedures for model identification, selection and adequacy testing. This paper describes a methodology for neural model misspecification testing. We introduce a generalization of the Durbin-Watson statistic for neural regression and discuss the general issues of misspecification testing using residual analysis. We derive a generalized influence matrix for neural estimators which enables us to evaluate the distribution of the statistic. We deploy Monte Carlo simulation to compare the power of the test for neural and linear regressors. While residual testing is not a sufficient condition for model adequacy, it is nevertheless a necessary condition to demonstrate that the model is a good approximation to the data generating process, particularly as neural-network estimation procedures are susceptible to partial convergence. The work is also an important step toward developing rigorous procedures for neural model identification, selection and adequacy testing which have started to appear in the literature. We demonstrate its applicability in the nontrivial problem of forecasting implied volatility innovations using high-frequency stock index options. Each step of the model building process is validated using statistical tests to verify variable significance and model adequacy with the results confirming the presence of nonlinear relationships in implied volatility innovations.
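As a simple illustration of residual-based misspecification testing in this spirit, the sketch below fits a small neural regressor and computes the ordinary Durbin-Watson statistic on its residuals; it uses scikit-learn's MLPRegressor as a generic stand-in and not the paper's generalized statistic.

```python
# Sketch: fit a small neural regressor and compute the ordinary Durbin-Watson
# statistic on its residuals (the paper derives a generalized version).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n = 400
x = np.linspace(0, 6, n)
y = np.sin(x) + 0.1 * rng.normal(size=n)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(x.reshape(-1, 1), y)
resid = y - net.predict(x.reshape(-1, 1))

dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)   # ~2 means no lag-1 autocorrelation
print(round(dw, 3))
```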
NASA Technical Reports Server (NTRS)
Holms, A. G.
1982-01-01
A previous report described a backward deletion procedure for model selection that was optimized for minimum prediction error and that used a multiparameter combination of the F-distribution and an order-statistics distribution due to Cochran. A computer program is described that applies the previously optimized procedure to real data. The use of the program is illustrated by examples.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition-to-transversion ratio or codon frequencies. For single-gene analysis, partitions might be defined according to protein tertiary structure, and for multiple-gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, the Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models using backward elimination rather than AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large-scale multi-gene analyses.
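The backward-elimination logic can be sketched generically as nested-model likelihood ratio tests with a stringent cut-off; in the sketch below the log-likelihoods and parameter counts are placeholders that would come from fitted codon models.

```python
# Generic backward-elimination step using likelihood ratio tests between a full
# (heterogeneous) model and reduced models that merge partitions; the values
# below are placeholders standing in for fitted codon-model results.
from scipy.stats import chi2

def lrt_pvalue(loglik_full, df_full, loglik_reduced, df_reduced):
    """P-value for the reduced (nested) model against the full model."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return chi2.sf(stat, df_full - df_reduced)

full = {"loglik": -12345.6, "df": 14}                    # hypothetical fitted models
reduced_candidates = {
    "merge_A_B": {"loglik": -12347.1, "df": 12},
    "merge_B_C": {"loglik": -12398.4, "df": 12},
}

alpha = 1e-4                                             # stringent cut-off, as recommended
for name, m in reduced_candidates.items():
    p = lrt_pvalue(full["loglik"], full["df"], m["loglik"], m["df"])
    verdict = "keep reduced model" if p > alpha else "retain full model"
    print(f"{name}: p = {p:.3g} -> {verdict}")
```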
Current Trends in Distance Education: An Administrative Model
ERIC Educational Resources Information Center
Compora, Daniel P.
2003-01-01
Current practices and procedures of distance education programs at selected institutions in higher education in Ohio were studied. Relevant data was found in the areas of: (1) content of the distance education program's mission statement; (2) needs assessment procedures; (3) student demographics; (4) course acquisition, development, and evaluation…
Ließ, Mareike; Schmidt, Johannes; Glaser, Bruno
2016-01-01
Tropical forests are significant carbon sinks and their soils' carbon storage potential is immense. However, little is known about the soil organic carbon (SOC) stocks of tropical mountain areas whose complex soil-landscape and difficult accessibility pose a challenge to spatial analysis. The choice of methodology for spatial prediction is of high importance to improve the expected poor model results in case of low predictor-response correlations. Four aspects were considered to improve model performance in predicting SOC stocks of the organic layer of a tropical mountain forest landscape: different spatial predictor settings, predictor selection strategies, various machine learning algorithms and model tuning. Five machine learning algorithms (random forests, artificial neural networks, multivariate adaptive regression splines, boosted regression trees and support vector machines) were trained and tuned to predict SOC stocks from predictors derived from a digital elevation model and a satellite image. Topographical predictors were calculated with a GIS search radius of 45 to 615 m. Finally, three predictor selection strategies were applied to the total set of 236 predictors. All machine learning algorithms, including the model tuning and predictor selection, were compared via five repetitions of a tenfold cross-validation. The boosted regression tree algorithm resulted in the overall best model. SOC stocks ranged from 0.2 to 17.7 kg m⁻², displaying a huge variability with diffuse insolation and curvatures of different scale guiding the spatial pattern. Predictor selection and model tuning improved the models' predictive performance in all five machine learning algorithms. The rather low number of selected predictors favours forward over backward selection procedures. Choosing predictors on the basis of their individual performance was outperformed by the two procedures that accounted for predictor interaction.
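The model-comparison protocol (tuning nested inside five repetitions of tenfold cross-validation) can be sketched as follows with scikit-learn; the data are synthetic stand-ins for the terrain and satellite predictors, and only two of the five learners are shown.

```python
# Sketch: compare two learners with light tuning under 5x repeated 10-fold CV;
# data are synthetic stand-ins for the terrain/satellite predictors.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=300, n_features=40, n_informative=8,
                       noise=5.0, random_state=5)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=5)

models = {
    "boosted trees": GridSearchCV(GradientBoostingRegressor(random_state=0),
                                  {"learning_rate": [0.05, 0.1],
                                   "n_estimators": [100, 200]}, cv=3),
    "random forest": GridSearchCV(RandomForestRegressor(random_state=0),
                                  {"max_features": ["sqrt", 0.3]}, cv=3),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```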
Sample selection in foreign similarity regions for multicrop experiments
NASA Technical Reports Server (NTRS)
Malin, J. T. (Principal Investigator)
1981-01-01
The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.
Design of high-fidelity haptic display for one-dimensional force reflection applications
NASA Astrophysics Data System (ADS)
Gillespie, Brent; Rosenberg, Louis B.
1995-12-01
This paper discusses the development of a virtual reality platform for the simulation of medical procedures which involve needle insertion into human tissue. The paper's focus is the hardware and software requirements for haptic display of a particular medical procedure known as epidural analgesia. To perform this delicate manual procedure, an anesthesiologist must carefully guide a needle through various layers of tissue using only haptic cues for guidance. As a simplifying aspect for the simulator design, all motions and forces involved in the task occur along a fixed line once insertion begins. To create a haptic representation of this procedure, we have explored both physical modeling and perceptual modeling techniques. A preliminary physical model was built based on CT-scan data of the operative site. A preliminary perceptual model was built based on current training techniques for the procedure provided by a skilled instructor. We compare and contrast these two modeling methods and discuss the implications of each. We select and defend the perceptual model as a superior approach for the epidural analgesia simulator.
Roberts, Steven; Martin, Michael A
2010-01-01
Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Our objective was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
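A minimal sketch of the basic bootstrap model-averaging idea (BOOT), not the double BOOT extension, is given below: for each bootstrap resample, the best candidate Poisson regression is chosen by AIC and its PM coefficient recorded. The series and candidate models are simulated placeholders.

```python
# Sketch of basic bootstrap model averaging for a PM-mortality style Poisson
# regression: per bootstrap resample, pick the best candidate model by AIC and
# record its PM coefficient. Data and candidate models are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
pm = rng.gamma(4.0, 5.0, size=n)                      # pollutant series
temp = rng.normal(20, 7, size=n)                      # confounder
deaths = rng.poisson(np.exp(3.0 + 0.002 * pm + 0.01 * temp))

candidates = [["pm"], ["pm", "temp"], ["pm", "temp", "temp2"]]
cols = {"pm": pm, "temp": temp, "temp2": temp ** 2}

def fit_best(idx):
    best = None
    for cand in candidates:
        X = sm.add_constant(np.column_stack([cols[c][idx] for c in cand]))
        res = sm.GLM(deaths[idx], X, family=sm.families.Poisson()).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, res.params[1])           # params[1] is the PM coefficient
    return best[1]

boot_pm = [fit_best(rng.integers(0, n, n)) for _ in range(200)]
print("bootstrap-averaged PM effect:", round(float(np.mean(boot_pm)), 5))
```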
Automated Predictive Big Data Analytics Using Ontology Based Semantics
Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.
2017-01-01
Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954
24 CFR 983.51 - Owner proposal selection procedures.
Code of Federal Regulations, 2011 CFR
2011-04-01
§ 983.51 Owner proposal selection procedures. (a) Procedures for selecting PBV proposals. The PHA administrative plan must describe the procedures for owner submission of PBV proposals and for PHA selection of PBV proposals...
Modelling municipal solid waste generation: a review.
Beigl, Peter; Lebersorger, Sandra; Salhofer, Stefan
2008-01-01
The objective of this paper is to review previously published models of municipal solid waste generation and to propose an implementation guideline which will provide a compromise between information gain and cost-efficient model development. The 45 modelling approaches identified in a systematic literature review aim at explaining or estimating the present or future waste generation using economic, socio-demographic or management-orientated data. A classification was developed in order to categorise these highly heterogeneous models according to the following criteria--the regional scale, the modelled waste streams, the hypothesised independent variables and the modelling method. A procedural practice guideline was derived from a discussion of the underlying models in order to propose beneficial design options concerning regional sampling (i.e., number and size of observed areas), waste stream definition and investigation, selection of independent variables and model validation procedures. The practical application of the findings was demonstrated with two case studies performed on different regional scales, i.e., on a household and on a city level. The findings of this review are finally summarised in the form of a relevance tree for methodology selection.
ERIC Educational Resources Information Center
Van Iddekinge, Chad H.; Ferris, Gerald R.; Perrewe, Pamela L.; Perryman, Alexa A.; Blass, Fred R.; Heetderks, Thomas D.
2009-01-01
Surprisingly few data exist concerning whether and how utilization of job-related selection and training procedures affects different aspects of unit or organizational performance over time. The authors used longitudinal data from a large fast-food organization (N = 861 units) to examine how change in use of selection and training relates to…
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts. Success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but commonly used data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale. Moreover, the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.
STOL Traffic environment and operational procedures
NASA Technical Reports Server (NTRS)
Schlundt, R. W.; Dewolf, R. W.; Ausrotas, R. A.; Curry, R. E.; Demaio, D.; Keene, D. W.; Speyer, J. L.; Weinreich, M.; Zeldin, S.
1972-01-01
The expected traffic environment for an intercity STOL transportation system is examined, and operational procedures are discussed in order to identify problem areas which impact STOL avionics requirements. Factors considered include: traffic densities, the STOL/CTOL/VTOL traffic mix, the expected ATC environment, aircraft and community noise models and community noise impact, flight paths for noise abatement, wind considerations affecting landing, approach and landing considerations, STOLport site selection, runway capacity, and STOL operations at jetports, suburban airports, and separate STOLports.
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low-quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal position of each feature is modelled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It is shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel
2013-12-01
This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete, using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
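The structure of the inversion step can be sketched as follows with scipy's dual_annealing; the forward model below is a deliberately simple placeholder rather than the Zwikker and Kosten model, and the bounds stand in for the parameter variation ranges.

```python
# Inversion-loop sketch with simulated annealing: fit parameters of a forward
# model to a measured surface-impedance curve. The forward model below is a
# simple placeholder, NOT the Zwikker-Kosten equivalent fluid model.
import numpy as np
from scipy.optimize import dual_annealing

freqs = np.linspace(200, 2000, 50)                      # Hz, selected frequency range

def forward_model(params, f):
    a, b, c = params                                    # placeholder parameters
    return a + b / f + c * np.sqrt(f)                   # toy impedance curve

true_params = (2.0, 800.0, 0.05)
rng = np.random.default_rng(7)
measured = forward_model(true_params, freqs) + 0.1 * rng.normal(size=freqs.size)

def misfit(params):
    return np.sum((forward_model(params, freqs) - measured) ** 2)

bounds = [(0.0, 10.0), (0.0, 5000.0), (0.0, 1.0)]       # variation ranges per parameter
result = dual_annealing(misfit, bounds, seed=7)
print(np.round(result.x, 3))
```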
48 CFR 715.370 - Alternative source selection procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
715.370 Alternative source selection procedures. The following selection procedures may be used, when...
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
Dynamical properties of maps fitted to data in the noise-free limit
Lindström, Torsten
2013-01-01
We argue that any attempt to classify dynamical properties from nonlinear finite time-series data requires a mechanistic model fitting the data better than piecewise linear models according to standard model selection criteria. Such a procedure seems necessary but still not sufficient. PMID:23768079
Time Series ARIMA Models of Undergraduate Grade Point Average.
ERIC Educational Resources Information Center
Rogers, Bruce G.
Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequentially dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
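A compact, automated stand-in for the identification stage is to search a small grid of ARIMA orders by an information criterion, as sketched below with statsmodels on a simulated series (not the GPA data).

```python
# Sketch: choose an ARIMA order over a small grid by AIC (an automated stand-in
# for the identification stage); the series here is simulated.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n = 200
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):                      # simulate an AR(1)-like series
    y[t] = 0.7 * y[t - 1] + e[t]

best = None
for p in range(3):
    for q in range(3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, (p, 0, q))
print("selected order:", best[1], "AIC:", round(best[0], 1))
```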
Lievens, Filip; Sackett, Paul R
2017-01-01
Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures.
Modernizing Selection and Promotion Procedures in the State Employment Security Service Agency.
ERIC Educational Resources Information Center
Derryck, Dennis A.; Leyes, Richard
The purpose of this feasibility study was to discover the types of selection and promotion models, strategies, and processes that must be employed if current State Employment Security Service Agency selection practices are to be made more directly relevant to the various populations currently being served. Specifically, the study sought to…
Sex Role Learning: A Test of the Selective Attention Hypothesis.
ERIC Educational Resources Information Center
Bryan, Janice Westlund; Luria, Zella
This paper reports three studies designed to determine whether children show selective attention and/or differential memory to slide pictures of same-sex vs. opposite-sex models and activities. Attention was measured using a feedback EEG procedure, which measured the presence or absence of alpha rhythms in the subjects' brains during presentation…
Conditional Covariance-Based Subtest Selection for DIMTEST
ERIC Educational Resources Information Center
Froelich, Amy G.; Habing, Brian
2008-01-01
DIMTEST is a nonparametric hypothesis-testing procedure designed to test the assumptions of a unidimensional and locally independent item response theory model. Several previous Monte Carlo studies have found that using linear factor analysis to select the assessment subtest for DIMTEST results in a moderate to severe loss of power when the exam…
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
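The general pipeline, ranking candidate inputs by an information measure, keeping a small subset, and evaluating an SVM-based model, can be sketched as follows; marginal mutual information is used here as a simple stand-in for PMI, and the data are synthetic.

```python
# Sketch of the general pipeline: rank candidate inputs by (marginal) mutual
# information as a simple stand-in for PMI, keep the top few, and evaluate an
# SVM regression model by cross-validation. Data are synthetic.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n, p = 400, 12
X = rng.normal(size=(n, p))                      # candidate sensor-derived inputs
y = 2.0 * X[:, 0] - 1.0 * X[:, 4] + 0.5 * X[:, 7] + 0.2 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)
keep = np.argsort(mi)[-4:]                       # keep the four most informative inputs
print("selected inputs:", sorted(keep.tolist()))

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
score = cross_val_score(model, X[:, keep], y, cv=5, scoring="r2")
print("CV R2:", round(score.mean(), 3))
```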
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene set selection are derived and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
Feature Screening in Ultrahigh Dimensional Cox's Model.
Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne
Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that the proposed procedure is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property. That is, with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulation to evaluate the finite sample performance of the proposed procedure and further compare the proposed procedure and existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.
Demirarslan, K Onur; Korucu, M Kemal; Karademir, Aykan
2016-08-01
Ecological problems arising after the construction and operation of a waste incineration plant generally originate from incorrect decisions made during the selection of the plant's location. The main objective of this study is to investigate how the method for selecting the location of a new municipal waste incineration plant can be improved by using a dispersion modelling approach supported by geographical information systems and multi-criteria decision analysis. With this aim, the appropriateness of the current location of an existing plant was assessed by applying a pollution dispersion model. Using this procedure, a site ranking for a total of 90 candidate locations plus the site of the existing incinerator was determined with a new location selection practice, and the current location of the plant was evaluated with ANOVA and Tukey tests. This ranking, initially made without modelling, was re-evaluated based on CALPUFF modelling of various variables, including pollutant concentrations, population and population density, demography, the temporality of meteorological data, pollutant type and risk formation type, and the results were re-ranked. The findings clearly indicate the impropriety of the current plant location, as the pollution dispersion model showed that it was the fourth-worst choice among the 91 possibilities. It was concluded that location selection procedures for waste incinerators should benefit from the improvements obtained by combining pollution dispersion studies with population density data to obtain the most suitable location.
Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing
NASA Astrophysics Data System (ADS)
Rabbitt, Christopher
This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)
ERIC Educational Resources Information Center
Yavuz, Guler; Hambleton, Ronald K.
2017-01-01
Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Torija, Antonio J; Ruiz, Diego P
2015-02-01
The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are (i) correlation-based feature-subset selection (CFS) and (ii) wrapper for feature-subset selection (WFS), and the data-reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)).
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov Chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator has experienced the maximum reduction internationally during the indicated years, thus making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures.
The multicategory case of the sequential Bayesian pixel selection and estimation procedure
NASA Technical Reports Server (NTRS)
Pore, M. D.; Dennis, T. B. (Principal Investigator)
1980-01-01
A Bayesian technique for stratified proportion estimation and a sampling scheme based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta density function is shown to be a density function for the general case, which allows the procedure to be extended.
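The k-class generalization amounts to replacing the beta prior by a Dirichlet prior, whose posterior under multinomial pixel counts is again Dirichlet; the tiny illustration below uses hypothetical prior parameters and counts.

```python
# Tiny illustration of the k-class generalization: a Dirichlet prior on class
# proportions combined with multinomial pixel counts gives a Dirichlet posterior.
# Prior parameters and counts below are hypothetical.
import numpy as np

prior_alpha = np.array([2.0, 2.0, 1.0, 1.0])     # Dirichlet prior (k = 4 classes)
pixel_counts = np.array([120, 45, 30, 5])        # classified pixel counts in a segment

post_alpha = prior_alpha + pixel_counts          # Dirichlet posterior parameters
post_mean = post_alpha / post_alpha.sum()        # posterior-mean proportion estimate
post_var = post_alpha * (post_alpha.sum() - post_alpha) / (
    post_alpha.sum() ** 2 * (post_alpha.sum() + 1.0))

print("estimated proportions:", np.round(post_mean, 3))
print("posterior variances:  ", np.round(post_var, 5))
```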
Performance optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Walsh, Joanne L.
1991-01-01
As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist that minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general-purpose optimization program CONMIN and approximate analyses. Sensitivity analyses consisting of derivatives of the objective function and constraints are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.
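The structure of the optimization (minimize a hover-power objective subject to a forward-flight constraint, with finite-difference sensitivities) can be sketched as follows; the objective and constraint are toy surrogates, not HOVT or CAMRAD, and SLSQP stands in for CONMIN.

```python
# Structural sketch of the design problem: minimize a surrogate hover-power
# objective over (taper initiation, root chord, taper ratio, max twist) subject
# to a surrogate forward-flight constraint. The functions are toy placeholders,
# not HOVT or CAMRAD; gradients come from finite differences inside SLSQP.
import numpy as np
from scipy.optimize import minimize

def hover_power(x):                      # toy surrogate objective
    taper_start, root_chord, taper_ratio, twist = x
    return (root_chord - 0.5) ** 2 + (taper_ratio - 0.6) ** 2 \
        + 0.3 * (taper_start - 0.4) ** 2 + 0.05 * (twist + 8.0) ** 2

def forward_flight_margin(x):            # toy surrogate constraint, must stay >= 0
    return 1.0 - 0.8 * x[1] - 0.2 * x[2]

x0 = np.array([0.5, 0.6, 0.8, -5.0])
bounds = [(0.2, 0.8), (0.3, 1.0), (0.3, 1.0), (-16.0, 0.0)]
res = minimize(hover_power, x0, method="SLSQP", bounds=bounds,
               constraints=[{"type": "ineq", "fun": forward_flight_margin}])
print(np.round(res.x, 3), round(res.fun, 4))
```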
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading is discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual-strength variations are used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function is illustrated with examples. In particular, the optimum design of a stiffened panel is discussed.
Sweeney, Mary M.; Shahan, Timothy A.
2016-01-01
Resurgence following removal of alternative reinforcement has been studied in non-human animals, children with developmental disabilities, and typically functioning adults. Adult human laboratory studies have included responses without a controlled history of reinforcement, included only two response options, or involved extensive training. Arbitrary responses allow for control over history of reinforcement. Including an inactive response never associated with reinforcement allows the conclusion that resurgence exceeds extinction-induced variability. Although procedures with extensive training produce reliable resurgence, a brief procedure with the same experimental control would allow more efficient examination of resurgence in adult humans. We tested the acceptability of a brief, single-session, three-alternative forced-choice procedure as a model of resurgence in undergraduates. Selecting a shape was the target response (reinforced in Phase I), selecting another shape was the alternative response (reinforced in Phase II), and selecting a third shape was never reinforced. Despite manipulating number of trials and probability of reinforcement, resurgence of the target response did not consistently exceed increases in the inactive response. Our findings reiterate the importance of an inactive control response and call for reexamination of resurgence studies using only two response options. We discuss potential approaches to generate an acceptable, brief human laboratory resurgence procedure. PMID:26724752
Syed, Zeeshan; Moscucci, Mauro; Share, David; Gurm, Hitinder S
2015-01-01
Background Clinical tools to stratify patients for emergency coronary artery bypass graft (ECABG) after percutaneous coronary intervention (PCI) create the opportunity to selectively assign patients undergoing procedures to hospitals with and without onsite surgical facilities for dealing with potential complications while balancing load across providers. The goal of our study was to investigate the feasibility of a computational model directly optimised for cohort-level performance to predict ECABG in PCI patients for this application. Methods Blue Cross Blue Shield of Michigan Cardiovascular Consortium registry data with 69 pre-procedural and angiographic risk variables from 68 022 PCI procedures in 2004–2007 were used to develop a support vector machine (SVM) model for ECABG. The SVM model was optimised for the area under the receiver operating characteristic curve (AUROC) at the level of the training cohort and validated on 42 310 PCI procedures performed in 2008–2009. Results There were 87 cases of ECABG (0.21%) in the validation cohort. The SVM model achieved an AUROC of 0.81 (95% CI 0.76 to 0.86). Patients in the predicted top decile were at a significantly increased risk relative to the remaining patients (OR 9.74, 95% CI 6.39 to 14.85, p<0.001) for ECABG. The SVM model optimised for the AUROC on the training cohort significantly improved discrimination, net reclassification and calibration over logistic regression and traditional SVM classification optimised for univariate performance. Conclusions Computational risk stratification directly optimising cohort-level performance holds the potential of high levels of discrimination for ECABG following PCI. This approach has value in selectively referring PCI patients to hospitals with and without onsite surgery. PMID:26688738
Code of Federal Regulations, 2013 CFR
2013-07-01
... selection procedures and discrimination. 1607.3 Section 1607.3 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.3 Discrimination defined: Relationship between use of selection procedures and...
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods This study explores a Heckman selection model of crash rate and severity at different severity levels, using a two-step procedure. The first step uses a probit regression model to describe the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight-injury and killed-or-seriously-injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during a two-year period. Results The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. Conclusions A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections. PMID:28732050
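A generic two-step Heckman sketch may help fix ideas: a probit selection equation followed by an outcome regression augmented with the inverse Mills ratio. The covariates, error correlation, and selection rule below are synthetic illustrations, not the paper's Hong Kong intersection specification.

```python
# Two-step Heckman estimation on simulated data (probit selection + Mills-ratio OLS).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=(n, 2))              # selection covariates (hypothetical)
x = z[:, :1]                             # outcome covariates (subset, for illustration)
u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)  # correlated errors
select = (0.5 + z @ np.array([1.0, -0.8]) + u[:, 0] > 0)           # selection rule
y = 1.0 + 2.0 * x[:, 0] + u[:, 1]                                  # latent outcome
y_obs, x_obs = y[select], x[select]

# Step 1: probit selection equation on the full sample.
probit = sm.Probit(select.astype(float), sm.add_constant(z)).fit(disp=False)
xb = sm.add_constant(z) @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)      # inverse Mills ratio

# Step 2: OLS on the selected sample, with the Mills ratio as an extra regressor.
X2 = sm.add_constant(np.column_stack([x_obs, mills[select]]))
ols = sm.OLS(y_obs, X2).fit()
print(ols.params)                        # intercept, slope, coefficient on the Mills ratio
```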
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
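For reference, the three criteria can be computed from a least-squares residual sum of squares in a few lines; in the MRM setting, n is the number of pairwise distances, which is exactly the inflation the authors point to. The RSS and sample-size values below are arbitrary placeholders, not results from the paper.

```python
# AIC, AICc and BIC from an RSS under the usual Gaussian-likelihood forms.
import numpy as np

def information_criteria(rss: float, n: int, n_coef: int):
    k = n_coef + 1                       # regression coefficients + error variance
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic

# Example: compare a 2-predictor and a 5-predictor distance-matrix regression by RSS,
# where n = 190 is the number of pairwise distances among 20 populations.
print(information_criteria(rss=120.0, n=190, n_coef=3))
print(information_criteria(rss=115.0, n=190, n_coef=6))
```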
Code of Federal Regulations, 2014 CFR
2014-01-01
... chapter. (iii) Bylaw provisions that adopt the language of the model or optional bylaws in OTS's... body or bodies of law selected for its corporate governance procedures, and shall file a copy of such...
Code of Federal Regulations, 2012 CFR
2012-01-01
... chapter. (iii) Bylaw provisions that adopt the language of the model or optional bylaws in OTS's... body or bodies of law selected for its corporate governance procedures, and shall file a copy of such...
Code of Federal Regulations, 2013 CFR
2013-01-01
... chapter. (iii) Bylaw provisions that adopt the language of the model or optional bylaws in OTS's... body or bodies of law selected for its corporate governance procedures, and shall file a copy of such...
10 CFR 431.295 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Refrigerated Bottled or Canned Beverage Vending Machines Test Procedures § 431.295 Units to be tested. For each basic model of refrigerated bottled or canned beverage vending machine selected for...
ERIC Educational Resources Information Center
McLean, Gary N.; Jones, L. Eugene
The two studies which received the 1975 Robert E. Slaughter Research Award in Business and Office Education are summarized in the document. The first paper, entitled "Effectiveness of Model Office, Cooperative Office Education, and Office Procedures Courses Based on Employee Satisfaction and Satisfactoriness Eighteen Months After…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-03
... Regulatory Research, U.S. Nuclear Regulatory Commission, Washington DC 20555-0001; telephone: 301-251-7445... relevant modeling factors to accompany descriptive material for the one or more models submitted by an..., Division of Engineering, Office of Nuclear Regulatory Research. [FR Doc. 2013-07702 Filed 4-2-13; 8:45 am...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1988-12-01
This document contains twelve papers on various aspects of low-level radioactive waste management. Topics of this volume include: performance assessment methodology; remedial action alternatives; site selection and site characterization procedures; intruder scenarios; sensitivity analysis procedures; mathematical models for mixed waste environmental transport; and risk assessment methodology. Individual papers were processed separately for the database. (TEM)
Building Energy Model Development for Retrofit Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chasar, David; McIlvaine, Janet; Blanchard, Jeremy
2012-09-30
Based on previous research conducted by Pacific Northwest National Laboratory and the Florida Solar Energy Center providing technical assistance to implement 22 deep energy retrofits across the nation, 6 homes were selected in Florida and Texas for detailed post-retrofit energy modeling to assess realized energy savings (Chandra et al., 2012). However, assessing realized savings can be difficult for homes where pre-retrofit occupancy and energy performance are unknown. Initially, savings had been estimated using a HERS Index comparison for these homes, but this does not account for confounding factors such as occupancy and weather. This research addresses a method to more reliably assess energy savings achieved in deep energy retrofits for which pre-retrofit utility bills or occupancy information is not available. A metered home, Riverdale, was selected as a test case for development of a modeling procedure that accounts for occupancy and weather factors, potentially creating more accurate estimates of energy savings. This “true up” procedure was developed using Energy Gauge USA software together with post-retrofit homeowner information and utility bills. The 12-step process adjusts the post-retrofit modeling results to correlate with post-retrofit utility bills and known occupancy information. The “trued” post-retrofit model is then used to estimate pre-retrofit energy consumption by changing the building efficiency characteristics to reflect the pre-retrofit condition, while keeping all weather- and occupancy-related factors the same. This creates a pre-retrofit model that is more comparable to the post-retrofit energy use profile and can improve energy savings estimates. For this test case, a home for which pre- and post-retrofit utility bills were available was selected for comparison and assessment of the accuracy of the “true up” procedure. In its current form, the procedure is quite time intensive; however, streamlined processing spreadsheets or incorporation into existing software tools would improve its efficiency. Retrofit activity appears to be gaining market share, and this would be a potentially valuable capability with relevance to marketing, program management, and retrofit success metrics.
A multiscale Markov random field model in wavelet domain for image segmentation
NASA Astrophysics Data System (ADS)
Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan
2017-07-01
The human vision system has abilities for feature detection, learning and selective attention, with properties of hierarchy and bidirectional connection realized by neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image-processing functions of the vision system. For an input scene, our model provides sparse representations using wavelet transforms and extracts topological organization using the MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework in our model. There are two information flows in our model, i.e., a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled simply by two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, our model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
Recollection and familiarity in amnesic mild cognitive impairment.
Serra, Laura; Bozzali, Marco; Cercignani, Mara; Perri, Roberta; Fadda, Lucia; Caltagirone, Carlo; Carlesimo, Giovanni A
2010-05-01
To investigate whether, in patients with amnesic mild cognitive impairment (a-MCI), recognition deficits are mainly due to a selective impairment of recollection rather than familiarity. Nineteen patients with a-MCI and 23 sex-, age-, and education-matched healthy controls underwent two experimental investigations, using the Process Dissociation Procedure (PDP) and the Remember/Know (R/K) procedure, to assess the differential contribution of recollection and familiarity to their recognition performance. Both experimental procedures revealed a selective preservation of familiarity in a-MCI patients. Moreover, the R/K procedure showed a statistically significant impairment of recollection in a-MCI patients for words that were either read or anagrammed during the study phase. A-MCI is known to be commonly associated with a high risk of conversion to Alzheimer's disease (AD). Several previous studies have demonstrated a characteristic impairment of episodic memory in a-MCI, with an early dysfunction of recognition. Our findings are consistent with the knowledge of neurodegeneration occurring in AD, which is characterized, at the earliest disease stages, by a selective involvement of the entorhinal cortex. Moreover, the current study supports the dual process model of recognition, which hypothesizes recollection and familiarity to be independent processes associated with distinct anatomical substrates.
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and it is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur
2016-03-01
This paper examines the current state of the procurement system in Malaysia, specifically in the construction industry, with respect to supplier selection. It proposes a comprehensive study of supplier-selection metrics for infrastructure building, weights the importance of each metric, and examines the relationships among the metrics as perceived by initiators, decision makers, buyers and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated and audited with fewer complications or difficulties. This will help the field of procurement to improve, as this research can be used to develop and redefine the policies and procedures that govern supplier selection. Developing this systematic process enables optimization of supplier selection and thus increases the value for every stakeholder, as the selection process is greatly simplified. A redefined policy and procedure not only increases a company's effectiveness and profit, but also enables the company to reach greater heights in the advancement of procurement in Malaysia.
Knowledge-Based Manufacturing and Structural Design for a High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Marx, William J.; Mavris, Dimitri N.; Schrage, Daniel P.
1994-01-01
The aerospace industry is currently addressing the problem of integrating manufacturing and design. To address the difficulties associated with using many conventional procedural techniques and algorithms, one feasible way to integrate the two concepts is with the development of an appropriate Knowledge-Based System (KBS). The authors present their reasons for selecting a KBS to integrate design and manufacturing. A methodology for an aircraft producibility assessment is proposed, utilizing a KBS for manufacturing process selection, that addresses both procedural and heuristic aspects of designing and manufacturing of a High Speed Civil Transport (HSCT) wing. A cost model is discussed that would allow system level trades utilizing information describing the material characteristics as well as the manufacturing process selections. Statements of future work conclude the paper.
Intelligent Exit-Selection Behaviors during a Room Evacuation
NASA Astrophysics Data System (ADS)
Zarita, Zainuddin; Lim Eng, Aik
2012-01-01
A modified version of an existing cellular automata (CA) model is proposed to simulate an evacuation procedure in a classroom with and without obstacles. A review of the extensive literature on the implementation of CA in modeling evacuation motion shows that most published studies do not take into account the pedestrians' ability to select an exit route. To address this issue, we develop a CA model incorporating a probabilistic neural network to model the decision-making ability of the pedestrians, and we simulate the exit-selection phenomenon. Intelligent exit-selection behavior is observed in our model. The simulation results show that occupants tend to select the exit closest to them when the density is low, but when the density is high they will go to an alternative exit to avoid a long wait. This reflects the fact that occupants may not fully utilize multiple exits during evacuation. The improvement in our proposed model is valuable for further study and for upgrading the safety aspects of building designs.
Inverse probability weighting for covariate adjustment in randomized studies
Li, Xiaochun; Li, Lingling
2013-01-01
SUMMARY Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. PMID:24038458
Inverse probability weighting for covariate adjustment in randomized studies.
Shen, Changyu; Li, Xiaochun; Li, Lingling
2014-02-20
Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. Copyright © 2013 John Wiley & Sons, Ltd.
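A minimal sketch of the two-stage idea (not the authors' estimator): the adjustment model is fit from baseline covariates and treatment assignment alone, before the outcome is examined, and the treatment effect is then estimated with inverse-probability weights. The data are simulated.

```python
# Two-stage IPW adjustment on simulated randomized-trial data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=(n, 3))                      # baseline covariates
treat = rng.binomial(1, 0.5, size=n)             # randomized assignment
y = 1.0 + 0.8 * treat + x @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

# Stage 1: propensity model fit before looking at the outcome.
ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
w = treat / ps + (1 - treat) / (1 - ps)          # inverse-probability weights

# Stage 2: weighted difference in means (Horvitz-Thompson style estimate).
effect = (np.sum(w * treat * y) / np.sum(w * treat)
          - np.sum(w * (1 - treat) * y) / np.sum(w * (1 - treat)))
print(f"IPW-adjusted treatment effect: {effect:.3f}")   # close to the true 0.8
```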
Robust Variable Selection with Exponential Squared Loss.
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-04-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
Robust Variable Selection with Exponential Squared Loss
Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping
2013-01-01
Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness that has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset that are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods. PMID:23913996
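The bounded loss at the heart of the proposal, rho(r) = 1 - exp(-r^2/gamma), can be illustrated with a simple penalized fit; the sketch below uses an L1 penalty and a derivative-free optimizer on synthetic data, whereas the authors tune gamma data-adaptively and use their own penalty and algorithm.

```python
# Penalized robust regression with an exponential squared loss (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.5, 0.0, 0.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
y[:10] += 15.0                                    # gross outliers in the response

def objective(beta, gamma=2.0, lam=0.05):
    r = y - X @ beta
    loss = np.sum(1.0 - np.exp(-r**2 / gamma))    # bounded, hence resistant to outliers
    return loss + lam * np.sum(np.abs(beta))

beta0 = np.linalg.lstsq(X, y, rcond=None)[0]      # warm start from least squares
fit = minimize(objective, beta0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(np.round(fit.x, 2))                         # small coefficients shrink toward 0
```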
Technical Aspects for the Creation of a Multi-Dimensional Land Information System
NASA Astrophysics Data System (ADS)
Ioannidis, Charalabos; Potsiou, Chryssy; Soile, Sofia; Verykokou, Styliani; Mourafetis, George; Doulamis, Nikolaos
2016-06-01
The complexity of modern urban environments and civil demands for fast, reliable and affordable decision-making requires not only a 3D Land Information System, which tends to replace traditional 2D LIS architectures, but also the need to address the time and scale parameters, that is, the 3D geometry of buildings in various time instances (4th dimension) at various levels of detail (LoDs - 5th dimension). This paper describes and proposes solutions for technical aspects that need to be addressed for the 5D modelling pipeline. Such solutions include the creation of a 3D model, the application of a selective modelling procedure between various time instances and at various LoDs, enriched with cadastral and other spatial data, and a procedural modelling approach for the representation of the inner parts of the buildings. The methodology is based on automatic change detection algorithms for spatial-temporal analysis of the changes that took place in subsequent time periods, using dense image matching and structure from motion algorithms. The selective modelling approach allows a detailed modelling only for the areas where spatial changes are detected. The procedural modelling techniques use programming languages for the textual semantic description of a building; they require the modeller to describe its part-to-whole relationships. Finally, a 5D viewer is developed, in order to tackle existing limitations that accompany the use of global systems, such as the Google Earth or the Google Maps, as visualization software. An application based on the proposed methodology in an urban area is presented and it provides satisfactory results.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
As many as three iterated statistical model-deletion procedures were considered for an experiment. Population model coefficients were chosen to simulate a saturated 2^4 factorial experiment having an unfavorable distribution of parameter values. Using random number studies, three model selection strategies were developed, namely: (1) a strategy to be used in anticipation of large coefficients of variation, approximately 65 percent; (2) a strategy to be used in anticipation of small coefficients of variation, 4 percent or less; and (3) a security-regret strategy to be used in the absence of such prior knowledge.
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care delivery. Computer process modeling methods are introduced for the lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined, and the data and procedures necessary to develop a DES model for the lung cancer diagnosis process, leading up to surgical treatment, are summarized. The analytical models include both Markov chain models and closed formulas. Markov chain models and their applications in healthcare are introduced, and the approach to deriving a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
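The Markov-chain side of the toolbox can be illustrated with a toy absorbing chain: with hypothetical states and weekly transition probabilities (not taken from the paper), the fundamental matrix gives the expected time from each stage of the diagnostic pathway to treatment.

```python
# Expected time to treatment in an absorbing Markov chain (illustrative states/rates).
import numpy as np

states = ["referral", "imaging", "biopsy/staging", "treatment"]   # last is absorbing
P = np.array([
    [0.20, 0.70, 0.10, 0.00],   # referral
    [0.00, 0.30, 0.60, 0.10],   # imaging
    [0.00, 0.00, 0.40, 0.60],   # biopsy/staging
    [0.00, 0.00, 0.00, 1.00],   # treatment (absorbing)
])
Q = P[:3, :3]                               # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)            # fundamental matrix
expected_weeks = N.sum(axis=1)              # expected time to absorption from each state
for s, t in zip(states[:3], expected_weeks):
    print(f"from {s:15s}: {t:.1f} weeks to treatment on average")
```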
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
To model the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate the models and to assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first stage of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models that included heterogeneity indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic: the highest rate of selection occurs at values of NDVI less than the maximum observed. Results for the land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.
Using normalization 3D model for automatic clinical brain quantative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders through quantitative clinical analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of the quantitative analysis thus relies on that of the subjectively selected ROIs, so standardizing the analysis procedure is fundamental to improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual-information registration technique was applied to realign functional medical images to standard structural medical images. Then the standard 3D brain model, which provides well-defined brain regions, was used in place of the manual ROIs in the objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from the well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs on structural medical images required by the traditional procedure. The method not only provides precise analysis results, but also improves the processing rate for large volumes of clinical medical images.
Čolović, Jelena; Rmandić, Milena; Malenović, Anđelija
2018-05-17
Numerous stationary phases have been developed with the aim of providing the desired performance during chromatographic analysis of basic solutes in their protonated form. In this work, a procedure is proposed for characterizing bonded stationary-phase performance when both qualitative and quantitative chromatographic factors are varied in chaotropic chromatography. Risperidone and its three impurities were selected as model substances, while the acetonitrile content in the mobile phase (20-30%), the pH of the aqueous phase (3.00-5.00), the content of chaotropic agents in the aqueous phase (10-100 mM), the type of chaotropic agent (NaClO4, CF3COONa), and the stationary phase type (Zorbax Eclipse XDB, Zorbax Extend) were studied as chromatographic factors. The proposed procedure combines D-optimal experimental design, indirect modeling, and the polynomial-modified Gaussian model, while a grid-point search method was selected for the final choice of the experimental conditions that lead to the best possible stationary-phase performance for basic solutes. Good agreement between the experimentally obtained chromatogram and the simulated chromatogram for the chosen experimental conditions (25% acetonitrile, 75 mM NaClO4, pH 4.00 on the Zorbax Eclipse XDB column) confirmed the applicability of the proposed procedure. An additional point was selected to verify the ability of the proposed procedure to distinguish changes in the solutes' elution order; the simulated chromatogram for 21.5% acetonitrile, 85 mM NaClO4, pH 5.00 on the Zorbax Eclipse XDB column was in line with the experimental data. Furthermore, the values of the left and right peak half-widths obtained from indirect modeling were used to evaluate the performance of differently modified stationary phases by applying a half-width plots approach. The results from the half-width plot approach, as well as from the proposed procedure, indicate higher efficiency and better separation performance for the stationary phase that is extra densely bonded and double end-capped with a trimethylsilyl group than for the stationary phase that combines end-capping with bidentate silane bonding, for chromatographic analysis of basic solutes in RP-HPLC systems with chaotropic agents.
O'Malley, A James; Cotterill, Philip; Schermerhorn, Marc L; Landon, Bruce E
2011-12-01
When 2 treatment approaches are available, there are likely to be unmeasured confounders that influence choice of procedure, which complicates estimation of the causal effect of treatment on outcomes using observational data. To estimate the effect of endovascular (endo) versus open surgical (open) repair, including possible modification by institutional volume, on survival after treatment for abdominal aortic aneurysm, accounting for observed and unobserved confounding variables. Observational study of data from the Medicare program using a joint model of treatment selection and survival given treatment to estimate the effects of type of surgery and institutional volume on survival. We studied 61,414 eligible repairs of intact abdominal aortic aneurysms during 2001 to 2004. The outcome, perioperative death, is defined as in-hospital death or death within 30 days of operation. The key predictors are use of endo, transformed endo and open volume, and endo-volume interactions. There is strong evidence of nonrandom selection of treatment with potential confounding variables including institutional volume and procedure date, variables not typically adjusted for in clinical trials. The best fitting model included heterogeneous transformations of endo volume for endo cases and open volume for open cases as predictors. Consistent with our hypothesis, accounting for unmeasured selection reduced the mortality benefit of endo. The effect of endo versus open surgery varies nonlinearly with endo and open volume. Accounting for institutional experience and unmeasured selection enables better decision-making by physicians making treatment referrals, investigators evaluating treatments, and policy makers.
ERIC Educational Resources Information Center
Hoover, H. D.; Plake, Barbara
The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B) the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as helping in process control, reducing process costs and improving product quality. This work proposes a solid-state fermentation distributed-parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation with a parameter identifiability analysis (PIA) is also performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were non-identifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters that must be evaluated. Further, PIA improved the model results, showing it to be an important procedure to carry out. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016.
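A scaled-down illustration of the estimate-then-screen-identifiability workflow, using a two-parameter logistic growth ODE in place of the seven-equation fermentation model; practical identifiability is screened here through the singular values of the residual Jacobian, which is one common surrogate and not necessarily the authors' PIA.

```python
# Fit ODE parameters to noisy data, then flag weakly identifiable directions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0, 24, 13)

def simulate(theta):
    mu, xmax = theta
    sol = solve_ivp(lambda t, x: mu * x * (1 - x / xmax), (0, 24), [0.1],
                    t_eval=t_obs, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(0)
y_obs = simulate([0.35, 8.0]) + rng.normal(scale=0.15, size=t_obs.size)

residuals = lambda theta: simulate(theta) - y_obs
fit = least_squares(residuals, x0=[0.2, 5.0], bounds=([0.01, 1.0], [2.0, 20.0]))
print("estimates:", np.round(fit.x, 3))

# Identifiability screen: near-zero singular values of the Jacobian flag parameters
# (or combinations) that the data barely constrain and that could be fixed or dropped.
_, s, _ = np.linalg.svd(fit.jac, full_matrices=False)
print("singular values:", np.round(s, 4), "condition number:", round(s[0] / s[-1], 1))
```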
45 CFR 1217.4 - Selection procedure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Selection procedure. 1217.4 Section 1217.4 Public... VISTA VOLUNTEER LEADER § 1217.4 Selection procedure. (a) Nomination. Candidates may be nominated in... Director's review. (b) Selection. VISTA volunteer leaders will be selected by the Regional Director (or his...
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
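The fit-and-internally-validate loop can be sketched as a logistic model per procedure category with a bootstrap estimate of optimism in the C-index; the predictors below are synthetic placeholders rather than the Japanese surveillance fields.

```python
# Logistic SSI model with an optimism-corrected C-index via bootstrap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(180, 60, n),        # operation duration in minutes (hypothetical)
    rng.integers(0, 4, n),         # ASA-class-like ordinal score (hypothetical)
    rng.binomial(1, 0.3, n),       # wound-class flag (hypothetical)
])
logit = -3.2 + 0.004 * X[:, 0] + 0.35 * X[:, 1] + 0.6 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resamples, compare resample AUROC with original-sample AUROC.
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    optimism.append(roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
                    - roc_auc_score(y, m.predict_proba(X)[:, 1]))
print(f"apparent C-index {apparent:.3f}, corrected {apparent - np.mean(optimism):.3f}")
```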
Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.
Yago, Martín; Alcover, Silvia
2016-07-01
According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of the probability of rejecting an analytical run that contains critical-size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)] has been proposed as an alternative QC performance measure because it is more closely aligned with the current introduction of risk management concepts for QC planning in the clinical laboratory. We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) to the capability of the analytical process, which allow QC planning based on the risk of harm to a patient due to the reporting of erroneous results. A QC procedure shows nearly the same Max E(NUF) value when used to control analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected for their high PEDC value are also characterized by a low value of Max E(NUF). The PEDC value can be used to estimate the probability of patient harm, allowing the selection of appropriate QC procedures in QC planning based on risk management. © 2016 American Association for Clinical Chemistry.
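PEDC for simple single-rule QC procedures follows from the standard power-function calculation at the critical systematic error; the sketch below uses the usual formula ΔSEc = (TEa − bias)/CV − 1.65 and treats controls as independent, and it does not reproduce the Max E(NUF) computation described in the paper.

```python
# Probability of rejecting a run at the critical systematic error for simple QC rules.
from scipy.stats import norm

def p_reject(limit_sd: float, n_controls: int, shift_sd: float) -> float:
    """P(at least one control exceeds +/- limit | systematic shift, in SD units)."""
    p_single = 1.0 - (norm.cdf(limit_sd - shift_sd) - norm.cdf(-limit_sd - shift_sd))
    return 1.0 - (1.0 - p_single) ** n_controls

# Critical systematic error for a process with allowable total error TEa, bias and CV (%).
TEa, bias, cv = 10.0, 1.0, 2.0
delta_sec = (TEa - bias) / cv - 1.65
for rule, limit, n in [("1_3s, N=1", 3.0, 1), ("1_2.5s, N=2", 2.5, 2)]:
    print(f"{rule}: PEDC = {p_reject(limit, n, delta_sec):.2f}")
```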
Beland, Michael D; Sternick, Laura A; Baird, Grayson L; Dupuy, Damian E; Cronan, John J; Mayo-Smith, William W
2016-04-01
Selection of the most appropriate modality for image guidance is essential for procedural success. We identified specific factors contributing to failure of ultrasound-guided procedures that were subsequently performed using CT guidance. This single-center, retrospective study included 164 patients who underwent a CT-guided biopsy, aspiration/drainage, or ablation after initially having the same procedure attempted unsuccessfully with ultrasound guidance. Review of the procedure images, reports, biopsy results, and clinical follow-up was performed and the reasons for inability to perform the procedure with ultrasound guidance were recorded. Patient cross-sectional area and depth to target were calculated. Differences in area and depth were compared using general linear modeling. Depth as a predictor of an unfavorable body habitus designation was modeled using logistic regression. US guidance was successful in the vast majority of cases (97%). Of the 164 procedures, there were 92 (56%) biopsies, 63 (38%) aspirations/drainages, and 9 (5%) ablations. The most common reason for procedure failure was poor acoustic window (83/164, 51%). Other reasons included target lesion being poorly discerned from adjacent tissue (61/164, 37%), adjacent bowel gas (34/164, 21%), body habitus (27/164, 16%), and gas-containing collection (22/164, 13%). Within the biopsy subgroup, patients for whom body habitus was a limiting factor were found to have on average a larger cross-sectional area and lesion depth relative to patients whose body habitus was not a complicating factor (p < 0.0001 and p = 0.0009). Poor acoustic window was the most common reason for procedural failure with ultrasound guidance. In addition, as lesion depth increased, the odds that body habitus would limit the procedure also increased. If preliminary imaging suggests a limited sonographic window, particularly for deeper lesions, proceeding directly to CT guidance should be considered.
Methodology for the evaluation of vascular surgery manpower in France.
Berger, L; Mace, J M; Ricco, J B; Saporta, G
2013-01-01
The French population is growing and ageing. It is expected to increase by 2.7% by 2020, and the number of individuals over 65 years of age is expected to increase by 3.3 million, a 33% increase, between 2005 and 2020. As the number of vascular surgery procedures is closely associated with the age of a population, it is anticipated that there will be a significant increase in the workload of vascular surgeons. A model is presented to predict changes in vascular surgery activity according to population ageing, including other parameters that could affect workload evolution. Three types of arterial procedures were studied: infrarenal abdominal aortic aneurysm (AAA) surgery, peripheral arterial occlusive disease (PAOD) procedures and carotid artery (CEA) procedures. Data were selected and extracted from the national PMSI (Medical Information System Program) database. Data obtained from 2000 were used to predict data based on an ageing population for 2008. From this model, a weighted index was defined for each group by comparing expected and observed workloads. According to the model, over this 8-year period, there was an overall increase in vascular procedures of 52.2%, with an increase of 89% in PAOD procedures. Between 2000 and 2009, the total increase was 58.0%, with 3.9% for AAA procedures, 101.7% for PAOD procedures and 13.2% for CEA procedures. The weighted model based on an ageing population and corrected by a weighted factor predicted this increase. This weighted model is able to predict the workload of vascular surgeons over the coming years. An ageing population and other factors could result in a significant increase in demand for vascular surgical services. Copyright © 2012 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.
Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.
1997-03-01
There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.
A Neurobiological Theory of Automaticity in Perceptual Categorization
ERIC Educational Resources Information Center
Ashby, F. Gregory; Ennis, John M.; Spiering, Brian J.
2007-01-01
A biologically detailed computational model is described of how categorization judgments become automatic in tasks that depend on procedural learning. The model assumes 2 neural pathways from sensory association cortex to the premotor area that mediates response selection. A longer and slower path projects to the premotor area via the striatum,…
Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto
2010-01-11
In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and eventually more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former comprising the independent variables and the latter the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted, yielding a ranking of the variables on the basis of their class-discrimination capabilities. Alternatively, the CMC index can be calculated for all the possible combinations of variables and the variable subset with the maximal CMC can be selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as Wilks' lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable forward selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets, and the results obtained were encouraging.
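For a single variable against the dummy-coded (unfolded) class matrix, the canonical correlation reduces to the multiple correlation of that variable on the class indicators, so the one-variable-at-a-time ranking can be sketched as below; this is a stand-in illustration, not the published CMC formula or the forward-selection wrapper.

```python
# Rank variables by their canonical correlation with the unfolded class matrix.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LinearRegression

X, y = load_wine(return_X_y=True)
classes = (y.reshape(-1, 1) == np.unique(y)).astype(float)   # unfolded class matrix

scores = []
for j in range(X.shape[1]):
    # R^2 of variable j regressed on class indicators; sqrt gives the canonical correlation.
    r2 = LinearRegression().fit(classes, X[:, j]).score(classes, X[:, j])
    scores.append((np.sqrt(r2), j))

for corr, j in sorted(scores, reverse=True)[:5]:
    print(f"variable {j:2d}: canonical correlation with class matrix = {corr:.3f}")
```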
VARIABLE SELECTION IN NONPARAMETRIC ADDITIVE MODELS
Huang, Jian; Horowitz, Joel L.; Wei, Fengrong
2010-01-01
We consider a nonparametric additive model of a conditional mean function in which the number of variables and additive components may be larger than the sample size but the number of nonzero additive components is “small” relative to the sample size. The statistical problem is to determine which additive components are nonzero. The additive components are approximated by truncated series expansions with B-spline bases. With this approximation, the problem of component selection becomes that of selecting the groups of coefficients in the expansion. We apply the adaptive group Lasso to select nonzero components, using the group Lasso to obtain an initial estimator and reduce the dimension of the problem. We give conditions under which the group Lasso selects a model whose number of components is comparable with the underlying model, and the adaptive group Lasso selects the nonzero components correctly with probability approaching one as the sample size increases and achieves the optimal rate of convergence. The results of Monte Carlo experiments show that the adaptive group Lasso procedure works well with samples of moderate size. A data example is used to illustrate the application of the proposed method. PMID:21127739
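A compact sketch of the two-stage recipe, assuming a B-spline expansion per covariate, a group lasso solved by proximal gradient descent, and adaptive weights taken from the initial group norms; tuning choices, theory, and the authors' exact algorithm are not reproduced.

```python
# Adaptive group lasso for component selection in an additive model (illustrative).
import numpy as np
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.uniform(-1, 1, size=(n, p))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)  # 2 active

spline = SplineTransformer(n_knots=6, degree=3, include_bias=False)
B = spline.fit_transform(X)                   # columns come in per-covariate blocks
k = B.shape[1] // p
groups = np.repeat(np.arange(p), k)
B = (B - B.mean(0)) / B.std(0)
y_c = y - y.mean()

def group_lasso(Z, y, groups, lam, weights, n_iter=2000):
    n_obs = len(y)
    step = n_obs / (np.linalg.norm(Z, 2) ** 2)    # 1/L for the (1/2n)||y - Zb||^2 term
    b = np.zeros(Z.shape[1])
    for _ in range(n_iter):
        z = b - step * Z.T @ (Z @ b - y) / n_obs
        for g in range(groups.max() + 1):
            idx = groups == g
            thr = step * lam * weights[g] * np.sqrt(idx.sum())
            nrm = np.linalg.norm(z[idx])
            b[idx] = 0.0 if nrm <= thr else (1 - thr / nrm) * z[idx]
    return b

b0 = group_lasso(B, y_c, groups, lam=0.05, weights=np.ones(p))       # stage 1
norms = np.array([np.linalg.norm(b0[groups == g]) for g in range(p)])
w = 1.0 / np.maximum(norms, 1e-8)                                     # adaptive weights
b1 = group_lasso(B, y_c, groups, lam=0.05, weights=w)                 # stage 2
selected = [g for g in range(p) if np.linalg.norm(b1[groups == g]) > 1e-6]
print("selected additive components:", selected)   # the active components 0 and 1 should dominate
```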
Airport Landside - Volume III : ALSIM Calibration and Validation.
DOT National Transportation Integrated Search
1982-06-01
This volume discusses calibration and validation procedures applied to the Airport Landside Simulation Model (ALSIM), using data obtained at Miami, Denver and LaGuardia Airports. Criteria for the selection of a validation methodology are described. T...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
Conceptual Modeling via Logic Programming
1990-01-01
Development steps include: define the user interface and query language; define procedures for specifying output; select a logic programming language; and develop a methodology for C3I users, together with a change model for sessions and baselines. The remainder of this record is garbled; it includes a partial citation for Conceptual Modeling via Logic Programming (Marina del Rey, Calif.).
Semisupervised Clustering by Iterative Partition and Regression with Neuroscience Applications
Qian, Guoqi; Wu, Yuehua; Ferrari, Davide; Qiao, Puxue; Hollande, Frédéric
2016-01-01
Regression clustering is a mixture of unsupervised and supervised statistical learning and data mining, and it is found in a wide range of applications including artificial intelligence and neuroscience. It performs unsupervised learning when it clusters the data according to their respective unobserved regression hyperplanes, and supervised learning when it fits regression hyperplanes to the corresponding data clusters. Applying regression clustering in practice requires means of determining the underlying number of clusters in the data, finding the cluster label of each data point, and estimating the regression coefficients of the model. In this paper, we review the estimation and selection issues in regression clustering with regard to least squares and robust statistical methods. We also provide a model-selection-based technique to determine the number of regression clusters underlying the data, and we develop a computing procedure for regression clustering estimation and selection. Finally, simulation studies are presented to assess the procedure, together with an analysis of a real data set on RGB cell marking in neuroscience to illustrate and interpret the method. PMID:27212939
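The iterative partition-and-regression loop itself is short; the sketch below uses plain least-squares fits on simulated two-cluster data, whereas the paper also covers robust fits and the selection of the number of clusters.

```python
# k-plane regression: assign each point to the hyperplane with the smallest squared
# residual, refit, and repeat (a production version would guard against empty clusters).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(-2, 2, size=(n, 1))
labels_true = rng.integers(0, 2, n)
y = np.where(labels_true == 0, 1.0 + 2.0 * x[:, 0], -1.0 - 1.5 * x[:, 0])
y += rng.normal(scale=0.3, size=n)

K = 2
labels = rng.integers(0, K, n)                     # random initial partition
for _ in range(50):
    fits = [LinearRegression().fit(x[labels == k], y[labels == k]) for k in range(K)]
    resid = np.column_stack([(y - f.predict(x)) ** 2 for f in fits])
    new_labels = resid.argmin(axis=1)
    if np.array_equal(new_labels, labels):
        break
    labels = new_labels

for k, f in enumerate(fits):
    print(f"cluster {k}: intercept {f.intercept_:.2f}, slope {f.coef_[0]:.2f}")
```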
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect both to the results provided by the deterministic model and to the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be reproduced by the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
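The kind of trajectory an SPN produces can be illustrated with a generic Gillespie stochastic simulation of a toy two-species network; the yeast cell-cycle SPN itself has many more places, transitions and rate laws.

```python
# Gillespie stochastic simulation of a toy reaction network (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
# Reactions: synthesis 0 -> A, degradation A -> 0, conversion A -> B, degradation B -> 0
stoich = np.array([[+1, 0], [-1, 0], [-1, +1], [0, -1]])
rates = np.array([5.0, 0.1, 0.5, 0.2])

def propensities(x):
    a, b = x
    return rates * np.array([1.0, a, a, b])

x, t, t_end = np.array([0, 0]), 0.0, 50.0
trajectory = [(t, *x)]
while t < t_end:
    a = propensities(x)
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)                 # time to the next reaction
    j = rng.choice(len(a), p=a / a0)               # which reaction fires
    x = x + stoich[j]
    trajectory.append((t, *x))
print(f"{len(trajectory)} events simulated; final state A={x[0]}, B={x[1]}")
```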
An affine projection algorithm using grouping selection of input vectors
NASA Astrophysics Data System (ADS)
Shin, JaeWook; Kong, NamWoong; Park, PooGyeon
2011-10-01
This paper presents an affine projection algorithm (APA) using grouping selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, a few input vectors that carry enough information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with the existing algorithms.
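A NumPy sketch of a single affine projection update with a simple input-vector screen; the threshold on the normalized inner product below is an assumption standing in for the paper's grouping and MSE-based selection steps, not a reproduction of them.

```python
# One regularized affine projection update with a crude redundancy screen.
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4, overlap_thresh=0.95):
    # X: (P, N) matrix of the P most recent input vectors; d: (P,) desired outputs.
    x0 = X[0] / (np.linalg.norm(X[0]) + 1e-12)
    keep = [0]
    for i in range(1, X.shape[0]):
        cos = abs(X[i] @ x0) / (np.linalg.norm(X[i]) + 1e-12)
        if cos < overlap_thresh:                 # drop nearly redundant vectors
            keep.append(i)
    A, dk = X[keep], d[keep]
    e = dk - A @ w                               # a-priori error vector
    w = w + mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(len(keep)), e)
    return w, e
```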
2015-11-24
This final rule implements a new Medicare Part A and B payment model under section 1115A of the Social Security Act, called the Comprehensive Care for Joint Replacement (CJR) model, in which acute care hospitals in certain selected geographic areas will receive retrospective bundled payments for episodes of care for lower extremity joint replacement (LEJR) or reattachment of a lower extremity. All related care within 90 days of hospital discharge from the joint replacement procedure will be included in the episode of care. We believe this model will further our goals in improving the efficiency and quality of care for Medicare beneficiaries with these common medical procedures.
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single grown tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from the random sampling and the restricted set of input variables to be selected. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
Delgado-García, José M; Gruart, Agnès
2008-12-01
The availability of transgenic mice mimicking selective human neurodegenerative and psychiatric disorders calls for new electrophysiological and microstimulation techniques capable of being applied in vivo in this species. In this article, we will concentrate on experiments and techniques developed in our laboratory during the past few years. Thus we have developed different techniques for the study of learning and memory capabilities of wild-type and transgenic mice with deficits in cognitive functions, using classical conditioning procedures. These techniques include different trace (tone/shock and shock/shock) conditioning procedures, that is, a classical conditioning task involving the cerebral cortex, including the hippocampus. We have also developed implantation and recording techniques for evoking long-term potentiation (LTP) in behaving mice and for recording the evolution of field excitatory postsynaptic potentials (fEPSP) evoked in the hippocampal CA1 area by the electrical stimulation of the commissural/Schaffer collateral pathway across conditioning sessions. Computer programs have also been developed to quantify the appearance and evolution of eyelid conditioned responses and the slope of evoked fEPSPs. According to the present results, the in vivo recording of the electrical activity of selected hippocampal sites during classical conditioning of eyelid responses appears to be a suitable experimental procedure for studying learning capabilities in genetically modified mice, and an excellent model for the study of selected neuropsychiatric disorders compromising cerebral cortex functioning.
DOT National Transportation Integrated Search
2002-01-01
This Guidebook provides an overview of procedures for consultant selection. The local agencies that intend to request federal and state funds for reimbursement of consultant services should follow specific selection and contracting procedures. These ...
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
ERIC Educational Resources Information Center
Ho, Tsung-Han
2010-01-01
Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…
Xu, G; Hughes-Oliver, J M; Brooks, J D; Yeatts, J L; Baynes, R E
2013-01-01
Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the prediction power of a QSAR model. Early QSAR models selected training sets in a subjective way and solutes in the training set were relatively homogenous. More recently, statistical methods such as D-optimal design or space-filling design have been applied but such methods are not always ideal. This paper describes a comprehensive procedure to select training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', which is a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Gas chromatograph amenability was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
Durner, George M.; Amstrup, Steven C.; Nielson, Ryan M.; McDonald, Trent; Huzurbazar, Snehalata
2004-01-01
Polar bears (Ursus maritimus) depend on ice-covered seas to satisfy life history requirements. Modern threats to polar bears include oil spills in the marine environment and changes in ice composition resulting from climate change. Managers need practical models that explain the distribution of bears in order to assess the impacts of these threats. We explored the use of discrete choice models to describe habitat selection by female polar bears in the Beaufort Sea. Using stepwise procedures we generated resource selection models of habitat use. Sea ice characteristics and ocean depths at known polar bear locations were compared to the same features at randomly selected locations. Models generated for each of four seasons confirmed complexities of habitat use by polar bears and their response to numerous factors. Bears preferred shallow water areas where different ice types intersected. Variation among seasons was reflected mainly in differential selection of total ice concentration, ice stages, floe sizes, and their interactions. Distance to the nearest ice interface was a significant term in models for three seasons. Water depth was selected as a significant term in all seasons, possibly reflecting higher productivity in shallow water areas. Preliminary tests indicate seasonal models can predict polar bear distribution based on prior sea ice data.
48 CFR 570.305 - Two-phase design-build selection procedures.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...
48 CFR 570.305 - Two-phase design-build selection procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...
48 CFR 570.305 - Two-phase design-build selection procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...
48 CFR 570.305 - Two-phase design-build selection procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 4 2011-10-01 2011-10-01 false Two-phase design-build...-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if the contracting officer uses the two-phase design-build selection procedures authorized by 570...
Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Casertano, Stefano
1991-01-01
A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
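A rough sketch of this style of maximum-likelihood calibration: estimate a mean absolute magnitude M0 and intrinsic dispersion sigma_M from observed trigonometric parallaxes (possibly negative) and apparent magnitudes, marginalising over the true absolute magnitude on a grid. The Gaussian error model, the grid marginalisation, and all variable names are assumptions; the selection-bias correction that is central to the published algorithm is omitted here.

```python
# Simplified likelihood for (M0, sigma_M); parallaxes in arcsec.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_like(params, m, plx_obs, plx_err):
    M0, log_sigma = params
    sigma_M = np.exp(log_sigma)
    Mgrid = np.linspace(M0 - 5 * sigma_M, M0 + 5 * sigma_M, 201)
    # predicted parallax for each star at each trial absolute magnitude:
    # m - M = -5 log10(plx) - 5  =>  plx = 10 ** ((M - m - 5) / 5)
    plx_pred = 10.0 ** ((Mgrid[None, :] - m[:, None] - 5.0) / 5.0)
    like = norm.pdf(plx_obs[:, None], plx_pred, plx_err[:, None])
    prior = norm.pdf(Mgrid, M0, sigma_M)
    marg = np.trapz(like * prior[None, :], Mgrid, axis=1)
    return -np.sum(np.log(marg + 1e-300))

# m, plx_obs, plx_err would come from the spectroscopically selected sample:
# result = minimize(neg_log_like, x0=[1.0, 0.0], args=(m, plx_obs, plx_err))
```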
Yago, Martín
2017-05-01
QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results. A statistical model was used to construct charts for the 1ks and X̄/χ² rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results. 1ks rules are simple, all-around rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable, between-run imprecision. X̄/χ² rules perform better when the number of controls analyzed during each QC event is increased to improve QC performance. Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision. © 2017 American Association for Clinical Chemistry.
ERIC Educational Resources Information Center
Aughinbaugh, Lorine A.; And Others
Four products were developed during the second year of the Extended Opportunity Programs and Services (EOPS) cost effectiveness study for California community colleges. This project report presents: (1) a revised cost analysis form for state-level reporting of institutional program effectiveness data and per-student costs by EOPS program category…
A survey of variable selection methods in two Chinese epidemiology journals
2010-01-01
Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis; e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals. PMID:20920252
Development of a Prototype Decision Support System to Manage the Air Force Alternative Care Program
1990-09-01
development model was selected to structure the development process. Since it is necessary to ensure...uncertainty. Furthermore, the SDLC model provides a specific framework "by which an application is conceived, developed, and implemented" (Davis and Olson...associated with the automation of the manual ACP procedures. The SDLC model has three stages: (1) definition, (2) development, and (3) installation
Sun, Yanqing; Sun, Liuquan; Zhou, Jie
2013-07-01
This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A K-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide a better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
Storm Duration and Antecedent Moisture Conditions for Flood Discharge Estimation
DOT National Transportation Integrated Search
2003-11-01
Design flows estimated by flood hydrograph simulation can be reasonably accurate or greatly in error, depending upon the modeling procedures and inputs selected. The objectives of this research project were (1) to determine which combinations of mode...
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight that represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed the model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
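A bare-bones nested sampling sketch for a marginal likelihood (evidence) estimate, assuming a uniform prior on the unit hypercube; new live points are drawn by naive rejection from the prior, standing in for the M-H or DREAMzs local sampling step the abstract discusses.

```python
# Minimal nested sampling loop (sketch only; no local MCMC, no live-point
# remainder term, fixed iteration count).
import numpy as np

def nested_sampling(log_like, ndim, n_live=100, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    live = rng.random((n_live, ndim))
    live_logl = np.array([log_like(p) for p in live])
    log_Z, log_X_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)
        log_X = -i / n_live                      # expected shrinking prior volume
        log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))
        log_Z = np.logaddexp(log_Z, live_logl[worst] + log_w)
        # replace the worst point by a prior draw with higher likelihood
        while True:
            cand = rng.random(ndim)
            cand_logl = log_like(cand)
            if cand_logl > live_logl[worst]:
                break
        live[worst], live_logl[worst] = cand, cand_logl
        log_X_prev = log_X
    return log_Z                                 # log marginal likelihood estimate
```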
McLeod, Helen; Cox, Ben F; Robertson, James; Duncan, Robyn; Matthew, Shona; Bhat, Raj; Barclay, Avril; Anwar, J; Wilkinson, Tracey; Melzer, Andreas; Houston, J Graeme
2017-09-01
The purpose of this investigation was to evaluate human Thiel-embalmed cadavers with the addition of extracorporeal driven ante-grade pulsatile flow in the aorta as a model for simulation training in interventional techniques and endovascular device testing. Three human cadavers embalmed according to the method of Thiel were selected. Extracorporeal pulsatile ante-grade flow of 2.5 L per min was delivered directly into the aorta of the cadavers via a surgically placed connection. During perfusion, aortic pressure and temperature were recorded and optimized for physiologically similar parameters. Pre- and post-procedure CT imaging was conducted to plan and follow up thoracic and abdominal endovascular aortic repair as it would be in a clinical scenario. Thoracic endovascular aortic repair (TEVAR) and endovascular abdominal repair (EVAR) procedures were conducted in simulation of a clinical case, under fluoroscopic guidance with a multidisciplinary team present. The Thiel cadaveric aortic perfusion model provided pulsatile ante-grade flow, with pressure and temperature, sufficient to conduct a realistic simulation of TEVAR and EVAR procedures. Fluoroscopic imaging provided guidance during the intervention. Pre- and post-procedure CT imaging facilitated planning and follow-up evaluation of the procedure. The human Thiel-embalmed cadavers with the addition of extracorporeal flow within the aorta offer an anatomically appropriate, physiologically similar robust model to simulate aortic endovascular procedures, with potential applications in interventional radiology training and medical device testing as a pre-clinical model.
Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.
Saucier, Gerard
2009-10-01
Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between the wideband 6 and narrowband 6 factors indicates that they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by adding an ℓ1 penalty term on the parameter vector to the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems, which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
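A hedged sketch of LASSO-based structure detection: build a library of candidate regressors for a pseudolinear model and keep the terms whose coefficients survive the ℓ1 penalty. The candidate terms below are illustrative only, not the aeroelastic model's actual basis, and LassoCV's cross-validated penalty choice is an assumption rather than the paper's procedure.

```python
# Keep the candidate terms with non-zero LASSO coefficients.
import numpy as np
from sklearn.linear_model import LassoCV

def detect_structure(u, y):
    # candidate terms: lagged outputs, lagged inputs, and simple nonlinearities
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2],
                         y[1:-1] ** 2, u[1:-1] * y[1:-1]])
    target = y[2:]
    names = ["y[k-1]", "y[k-2]", "u[k-1]", "u[k-2]", "y[k-1]^2", "u[k-1]*y[k-1]"]
    model = LassoCV(cv=5).fit(X, target)
    selected = [n for n, c in zip(names, model.coef_) if abs(c) > 1e-8]
    return selected, model
```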
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model
Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.
2018-01-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modelled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and values of GOF above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
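A minimal un-mixing sketch of the kind of multivariate mixing model used above: estimate source proportions for one mixture from tracer concentrations by constrained least squares, with proportions non-negative and summing to one. The tracer scaling and the goodness-of-fit (GOF) expression below are simplified assumptions, not the study's exact formulation.

```python
# Constrained least-squares un-mixing of a sediment mixture.
import numpy as np
from scipy.optimize import minimize

def unmix(source_means, mixture):
    # source_means: (n_sources, n_tracers); mixture: (n_tracers,)
    n_sources = source_means.shape[0]
    scale = mixture.copy()                       # relative errors per tracer

    def objective(p):
        pred = p @ source_means
        return np.sum(((mixture - pred) / scale) ** 2)

    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    res = minimize(objective, np.full(n_sources, 1.0 / n_sources),
                   bounds=[(0.0, 1.0)] * n_sources, constraints=cons)
    gof = 1.0 - np.mean(np.abs(mixture - res.x @ source_means) / mixture)
    return res.x, gof                            # estimated proportions, fit quality
```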
Hanrahan, Kirsten; McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W Nick; Zimmerman, M Bridget; Ersig, Anne L
2012-10-01
This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children's responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, titled Children, Parents and Distraction, is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure.
MnDOT thin whitetopping selection procedures : final report.
DOT National Transportation Integrated Search
2017-06-01
This report provides an integrated selection procedure for evaluating whether an existing hot-mix asphalt (HMA) pavement is an appropriate candidate for a bonded concrete overlay of asphalt (BCOA). The selection procedure includes (1) a desk review, ...
Gurm, Hitinder S.; Kooiman, Judith; LaLonde, Thomas; Grines, Cindy; Share, David; Seth, Milan
2014-01-01
Background: Transfusion is a common complication of Percutaneous Coronary Intervention (PCI) and is associated with adverse short- and long-term outcomes. There is no risk model for identifying patients most likely to receive transfusion after PCI. The objective of our study was to develop and validate a tool for predicting receipt of blood transfusion in patients undergoing contemporary PCI. Methods: Random forest models were developed utilizing 45 pre-procedural clinical and laboratory variables to estimate the receipt of transfusion in patients undergoing PCI. The most influential variables were selected for inclusion in an abbreviated model. Model performance estimating transfusion was evaluated in an independent validation dataset using the area under the ROC curve (AUC), with net reclassification improvement (NRI) used to compare full and reduced model prediction after grouping into low, intermediate, and high risk categories. The impact of procedural anticoagulation on observed versus predicted transfusion rates was assessed for the different risk categories. Results: Our study cohort was comprised of 103,294 PCI procedures performed at 46 hospitals in Michigan from July 2009 through December 2012, of which 72,328 (70%) were randomly selected for training the models and 30,966 (30%) for validation. The models demonstrated excellent calibration and discrimination (AUC: full model = 0.888 (95% CI 0.877-0.899), reduced model AUC = 0.880 (95% CI 0.868-0.892), p for difference 0.003, NRI = 2.77%, p = 0.007). Procedural anticoagulation and radial access significantly influenced transfusion rates in the intermediate and high risk patients, but no clinically relevant impact was noted in low risk patients, who made up 70% of the total cohort. Conclusions: The risk of transfusion among patients undergoing PCI can be reliably calculated using a novel easy-to-use computational tool (https://bmc2.org/calculators/transfusion). This risk prediction algorithm may prove useful for both bedside clinical decision making and risk adjustment for assessment of quality. PMID:24816645
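A schematic version of the full-versus-reduced workflow described above, assuming scikit-learn and placeholder arrays X_train, y_train, X_test, y_test; the 45-variable list and the tuning of the published model are not reproduced here.

```python
# Fit a full random forest, keep the most influential variables, refit a
# reduced model, and compare discrimination by AUC on held-out data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def full_and_reduced_auc(X_train, y_train, X_test, y_test, n_keep=10):
    full = RandomForestClassifier(n_estimators=500, random_state=0)
    full.fit(X_train, y_train)
    auc_full = roc_auc_score(y_test, full.predict_proba(X_test)[:, 1])

    top = np.argsort(full.feature_importances_)[::-1][:n_keep]   # most influential
    reduced = RandomForestClassifier(n_estimators=500, random_state=0)
    reduced.fit(X_train[:, top], y_train)
    auc_red = roc_auc_score(y_test, reduced.predict_proba(X_test[:, top])[:, 1])
    return auc_full, auc_red, top
```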
Using machine learning for sequence-level automated MRI protocol selection in neuroradiology.
Brown, Andrew D; Marotta, Thomas R
2018-05-01
Incorrect imaging protocol selection can lead to important clinical findings being missed, contributing to both wasted health care resources and patient harm. We present a machine learning method for analyzing the unstructured text of clinical indications and patient demographics from magnetic resonance imaging (MRI) orders to automatically protocol MRI procedures at the sequence level. We compared 3 machine learning models - support vector machine, gradient boosting machine, and random forest - to a baseline model that predicted the most common protocol for all observations in our test set. The gradient boosting machine model significantly outperformed the baseline and demonstrated the best performance of the 3 models in terms of accuracy (95%), precision (86%), recall (80%), and Hamming loss (0.0487). This demonstrates the feasibility of automating sequence selection by applying machine learning to MRI orders. Automated sequence selection has important safety, quality, and financial implications and may facilitate improvements in the quality and safety of medical imaging service delivery.
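A simplified sketch of protocol prediction from free-text MRI orders: TF-IDF features from the clinical indication fed to a gradient boosting classifier. The real system works at the sequence level (effectively multi-label); this sketch predicts a single protocol label, and the example order text is invented.

```python
# Text-to-protocol classifier sketch with scikit-learn.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier

def build_protocol_model(order_texts, protocol_labels):
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2, max_features=2000),
        # densify the sparse TF-IDF matrix for the boosting estimator
        FunctionTransformer(lambda x: x.toarray(), accept_sparse=True),
        GradientBoostingClassifier(n_estimators=300, learning_rate=0.1),
    )
    model.fit(order_texts, protocol_labels)
    return model

# model = build_protocol_model(train_texts, train_labels)
# predicted = model.predict(["67F, new onset seizures, rule out mass"])
```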
Optimal Sensor Selection for Health Monitoring Systems
NASA Technical Reports Server (NTRS)
Santi, L. Michael; Sowers, T. Shane; Aguilar, Robert B.
2005-01-01
Sensor data are the basis for performance and health assessment of most complex systems. Careful selection and implementation of sensors is critical to enable high fidelity system health assessment. A model-based procedure that systematically selects an optimal sensor suite for overall health assessment of a designated host system is described. This procedure, termed the Systematic Sensor Selection Strategy (S4), was developed at NASA John H. Glenn Research Center in order to enhance design phase planning and preparations for in-space propulsion health management systems (HMS). Information and capabilities required to utilize the S4 approach in support of design phase development of robust health diagnostics are outlined. A merit metric that quantifies diagnostic performance and overall risk reduction potential of individual sensor suites is introduced. The conceptual foundation for this merit metric is presented and the algorithmic organization of the S4 optimization process is described. Representative results from S4 analyses of a boost stage rocket engine previously under development as part of NASA's Next Generation Launch Technology (NGLT) program are presented.
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
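A small sketch of how equivalent hydraulic conductivities can be formed from the coarse-fraction percentage and the end-member conductivities, using the averaging methods compared in the calibration (arithmetic, geometric, harmonic); the numeric values in the usage comment are illustrative only.

```python
# Equivalent conductivity of a cell with coarse fraction f_coarse.
import numpy as np

def equivalent_k(f_coarse, k_coarse, k_fine, method="arithmetic"):
    if method == "arithmetic":            # typically used horizontally
        return f_coarse * k_coarse + (1 - f_coarse) * k_fine
    if method == "geometric":
        return k_coarse ** f_coarse * k_fine ** (1 - f_coarse)
    if method == "harmonic":              # typically used vertically
        return 1.0 / (f_coarse / k_coarse + (1 - f_coarse) / k_fine)
    raise ValueError(f"unknown method: {method}")

# e.g. horizontal K via arithmetic averaging, vertical K via harmonic averaging:
# kh = equivalent_k(0.4, 50.0, 0.01, "arithmetic")
# kv = equivalent_k(0.4, 50.0, 0.01, "harmonic")
```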
Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.
ERIC Educational Resources Information Center
Tarquinio, Nancy; And Others
1990-01-01
Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)
Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K
2001-01-01
When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
Code of Federal Regulations, 2013 CFR
2013-07-01
...: Relationship between use of selection procedures and discrimination. 60-3.3 Section 60-3.3 Public Contracts and... PROGRAMS, EQUAL EMPLOYMENT OPPORTUNITY, DEPARTMENT OF LABOR 3-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.3 Discrimination defined: Relationship between use of selection...
Luo, W; Chen, M; Chen, A; Dong, W; Hou, X; Pu, B
2015-04-01
To isolate lactic acid bacteria (LAB) from pao cai, a Chinese traditional fermented vegetable, with outstanding inhibitory activity against Salmonella inoculated on fresh-cut apple, using a modelling method. Four kinds of pao cai were selected. A total of 122 isolates exhibited typical LAB characteristics: Gram-positive and catalase-negative, among which 104 (85·24%) colonies showed antibacterial activity against Salmonella by the well diffusion assay. Four colonies showing the maximum antibacterial radius against Salmonella were selected to co-inoculate with Salmonella on fresh-cut apple and stored at 10°C, and were further identified as three strains of Lactobacillus plantarum and one strain of Lactobacillus brevis by 16S rRNA gene sequence analysis. The modified Gompertz model was employed to analyse the growth of the micro-organisms on apple wedges. Two of the four selected strains showed antagonistic activity against Salmonella on fresh-cut apple, one of which, RD1, exhibited the best inhibitory activity (Salmonella were greatly inhibited when co-inoculated with RD1 at 10°C at 168 h). No deterioration in odour or appearance of the apple piece was observed by the triangle test when fresh-cut apple was inoculated with RD1. The mathematical modelling method is essential to select LAB with outstanding inhibitory activity against Salmonella associated with fresh-cut apple. LAB RD1 holds promise for the preservation of fresh-cut apple. This study provides a new method for fresh-cut product preservation. Besides, to make the LAB isolation procedure more rigorous, this study is the first to add a mathematical modelling method to the isolation procedure. © 2014 The Society for Applied Microbiology.
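A sketch of fitting the modified (Zwietering-type) Gompertz model to log counts of bacteria grown on apple wedges, where A is the asymptotic increase, mu_m the maximum growth rate and lam the lag time. The starting values and the data arrays are placeholders, not the study's measurements.

```python
# Modified Gompertz growth curve fitted with non-linear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu_m, lam):
    return A * np.exp(-np.exp(mu_m * np.e / A * (lam - t) + 1.0))

# t in hours, y as log10(N/N0) from plate counts:
# popt, pcov = curve_fit(gompertz, t, y, p0=[6.0, 0.1, 10.0])
# A_hat, mu_hat, lag_hat = popt
```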
Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing
ERIC Educational Resources Information Center
Deng, Hui; Ansley, Timothy; Chang, Hua-Hua
2010-01-01
In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…
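A toy sketch of the maximum Fisher information (F) rule under a 2PL IRT model: given a provisional ability estimate, pick the unadministered item with the largest information. The a-stratified variants (STR/USTR) compared above would restrict this search to the current a-stratum before maximising; the item parameters in the usage comment are assumed given.

```python
# Maximum-information item selection for a 2PL item pool.
import numpy as np

def item_information(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

def select_item(theta_hat, a, b, administered):
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf          # never reuse an administered item
    return int(np.argmax(info))

# next_item = select_item(0.3, a_params, b_params, administered={2, 17})
```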
2006-11-26
with controlled micro and nanostructure for highly selective, high sensitivity assays. The process was modeled and a procedure for fabricating SERS...small volumes with controlled micro and nanostructure for highly selective, high sensitivity assays. We proved the feasibility of the technique and...films templated by colloidal crystals. The control over the film structure allowed optimizing their performance for potential sensor applications. The
Symbolic Model of Perception in Dynamic 3D Environments
2006-11-01
can retrieve memories, work on goals, recognize visual or aural percepts, and perform actions. ACT-R has been selected for the current...types of memory. Procedural memory is the store of condition-action productions that are selected and executed by the core production system...a declarative memory chunk that is made available to the core production system through the vision module. The vision module has been
2010-09-01
matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column...problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
NASA Technical Reports Server (NTRS)
Sharma, M. M.
1979-01-01
An assessment and determination of technology requirements for developing a demonstration model to evaluate feasibility of practical cryogenic liquid level, pressure, and temperature sensors is presented. The construction of a demonstration model to measure characteristics of the selected sensor and to develop test procedures are discussed as well as the development of an appropriate electronic subsystem to operate the sensors.
A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.
Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger
2018-04-19
Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
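A condensed sketch of the site-specific idea: a multinomial logistic regression that maps per-position explanatory variables (e.g. phyloP conservation, replication timing, expression level) to probabilities of each mutation type, including "no mutation". The feature encoding and variable names are placeholders, not the study's exact covariates.

```python
# Multinomial logistic regression over genomic positions.
from sklearn.linear_model import LogisticRegression

def fit_site_model(X, mutation_type):
    # X: (n_positions, n_features) site-level covariates;
    # mutation_type: integer-coded outcome with one code reserved for "no mutation".
    # The default lbfgs solver fits the multinomial model for multiclass targets.
    model = LogisticRegression(max_iter=1000)
    model.fit(X, mutation_type)
    return model

# probs = fit_site_model(X_train, y_train).predict_proba(X_sites)
# expected_counts_per_type = probs.sum(axis=0)   # e.g. for one genomic element
```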
An application of model-fitting procedures for marginal structural models.
Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B
2005-08-15
Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function--a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
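A minimal illustration of IPTW estimation for a point-treatment marginal structural model; the paper's application is longitudinal, which requires time-varying, cumulated weights and robust variance estimation, so this single time-point analogue is only a sketch. All variable names are placeholders.

```python
# Stabilized inverse-probability-of-treatment weights and a weighted MSM fit.
import numpy as np
import statsmodels.api as sm

def iptw_msm(y, treat, confounders):
    # 1. propensity model: P(A = 1 | L)
    ps_model = sm.Logit(treat, sm.add_constant(confounders)).fit(disp=0)
    ps = ps_model.predict(sm.add_constant(confounders))
    # 2. stabilized weights: marginal P(A = a) / P(A = a | L)
    p_treat = treat.mean()
    w = np.where(treat == 1, p_treat / ps, (1 - p_treat) / (1 - ps))
    # 3. weighted regression of outcome on treatment alone (the MSM)
    msm = sm.WLS(y, sm.add_constant(treat), weights=w).fit()
    return msm.params, w
```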
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
Our empirical results show that we can predict the GDP growth rate more accurately in continents with fewer large economies than in smaller economies like Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on this forecast stability. These results are generally independent of the forecasting procedures. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
Code of Federal Regulations, 2010 CFR
2010-07-01
... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...
Code of Federal Regulations, 2014 CFR
2014-07-01
... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...
Code of Federal Regulations, 2012 CFR
2012-07-01
... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...
Code of Federal Regulations, 2011 CFR
2011-07-01
... employment or membership opportunities of members of any race, sex, or ethnic group will be considered to be... selection procedures and suitable alternative methods of using the selection procedure which have as little...
Model weights and the foundations of multimodel inference
Link, W.A.; Barker, R.J.
2006-01-01
Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximations to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
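A short sketch of the two weighting schemes discussed: Akaike weights from AIC and approximate posterior model probabilities from BIC, assuming equal prior model probabilities. The example log-likelihoods in the usage comment are made up.

```python
# AIC and BIC model weights for a set of candidate models.
import numpy as np

def model_weights(log_liks, n_params, n_obs):
    log_liks, n_params = np.asarray(log_liks, float), np.asarray(n_params, float)
    aic = -2 * log_liks + 2 * n_params
    bic = -2 * log_liks + n_params * np.log(n_obs)

    def weights(ic):
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    return weights(aic), weights(bic)

# aic_w, bic_w = model_weights([-120.3, -118.9, -117.5], [3, 5, 8], n_obs=60)
```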
A selective-update affine projection algorithm with selective input vectors
NASA Astrophysics Data System (ADS)
Kong, NamWoong; Shin, JaeWook; Park, PooGyeon
2011-10-01
This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors have enough information for the update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.
Procedures for generation and reduction of linear models of a turbofan engine
NASA Technical Reports Server (NTRS)
Seldner, K.; Cwynar, D. S.
1978-01-01
A real-time hybrid simulation of the Pratt & Whitney F100-PW-100 turbofan engine was used for linear-model generation. The linear models were used to analyze the effect of disturbances about an operating point on the dynamic performance of the engine. A procedure that disturbs, samples, and records the state and control variables was developed. For large systems, such as the F100 engine, the state vector is large and may contain high-frequency information not required for control. Thus, reducing the full-state model to a reduced-order model may be a practicable approach to simplifying the control design. A reduction technique was developed to generate reduced-order models. Selected linear and nonlinear output responses to exhaust-nozzle area and main-burner fuel flow disturbances are presented for comparison.
Asghari, Mehdi Poursheikhali; Hayatshahi, Sayyed Hamed Sadat; Abdolmaleki, Parviz
2012-01-01
From both the structural and functional points of view, β-turns play important biological roles in proteins. In the present study, a novel two-stage hybrid procedure has been developed to identify β-turns in proteins. Binary logistic regression was initially used for the first time to select significant sequence parameters in identification of β-turns due to a re-substitution test procedure. Sequence parameters were consisted of 80 amino acid positional occurrences and 20 amino acid percentages in sequence. Among these parameters, the most significant ones which were selected by binary logistic regression model, were percentages of Gly, Ser and the occurrence of Asn in position i+2, respectively, in sequence. These significant parameters have the highest effect on the constitution of a β-turn sequence. A neural network model was then constructed and fed by the parameters selected by binary logistic regression to build a hybrid predictor. The networks have been trained and tested on a non-homologous dataset of 565 protein chains. With applying a nine fold cross-validation test on the dataset, the network reached an overall accuracy (Qtotal) of 74, which is comparable with results of the other β-turn prediction methods. In conclusion, this study proves that the parameter selection ability of binary logistic regression together with the prediction capability of neural networks lead to the development of more precise models for identifying β-turns in proteins. PMID:27418910
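A rough sketch of the two-stage idea: screen the sequence features with a logistic regression, then train a small neural network on the screened features. Here an L1-penalised logistic regression stands in for the paper's significance-based (re-substitution) screen, and the feature matrix X of positional occurrences and amino acid percentages is assumed to be given.

```python
# Two-stage hybrid: logistic-regression feature screen + neural network classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def two_stage_beta_turn(X, y, n_keep=20):
    # stage 1: logistic regression used as a feature screen
    screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    screen.fit(X, y)
    keep = np.argsort(np.abs(screen.coef_[0]))[::-1][:n_keep]
    # stage 2: neural network trained on the screened features only
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
    net.fit(X[:, keep], y)
    return keep, net
```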
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1986-01-01
An economic evaluation and planning procedure which assesses the effects of various policies on fixed satellite business ventures is described. The procedure is based on a stochastic financial simulation model, the Domsat II, which evaluates spacecraft reliability, market performance, and cost uncertainties. The application of the Domsat II model to the assessment of NASA's ion thrusters for on-orbit propulsion and GaAs solar cell technology is discussed. The effects of insurance rates and the self-insurance option on the financial performance of communication satellite business ventures are investigated. The selection of a transportation system for placing the satellites into GEO is analyzed.
A modified procedure for mixture-model clustering of regional geochemical data
Ellefsen, Karl J.; Smith, David B.; Horton, John D.
2014-01-01
A modified procedure is proposed for mixture-model clustering of regional-scale geochemical data. The key modification is the robust principal component transformation of the isometric log-ratio transforms of the element concentrations. This principal component transformation and the associated dimension reduction are applied before the data are clustered. The principal advantage of this modification is that it significantly improves the stability of the clustering. The principal disadvantage is that it requires subjective selection of the number of clusters and the number of principal components. To evaluate the efficacy of this modified procedure, it is applied to soil geochemical data that comprise 959 samples from the state of Colorado (USA) for which the concentrations of 44 elements are measured. The distributions of element concentrations that are derived from the mixture model and from the field samples are similar, indicating that the mixture model is a suitable representation of the transformed geochemical data. Each cluster and the associated distributions of the element concentrations are related to specific geologic and anthropogenic features. In this way, mixture model clustering facilitates interpretation of the regional geochemical data.
This report presents the results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.
Parallax-Robust Surveillance Video Stitching
He, Botao; Yu, Shaohua
2015-01-01
This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching-model calculation stage, we develop a layered warping algorithm to align the background scenes; this algorithm is location-dependent and proves more robust to parallax than the traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection-based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide-FOV video output without ghosting or noticeable seams. PMID:26712756
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
Brown, Jeremiah R; MacKenzie, Todd A; Maddox, Thomas M; Fly, James; Tsai, Thomas T; Plomondon, Mary E; Nielson, Christopher D; Siew, Edward D; Resnic, Frederic S; Baker, Clifton R; Rumsfeld, John S; Matheny, Michael E
2015-12-11
Acute kidney injury (AKI) occurs frequently after cardiac catheterization and percutaneous coronary intervention. Although a clinical risk model exists for percutaneous coronary intervention, no models exist for both procedures, nor do existing models account for risk factors prior to the index admission. We aimed to develop such a model for use in prospective automated surveillance programs in the Veterans Health Administration. We collected data on all patients undergoing cardiac catheterization or percutaneous coronary intervention in the Veterans Health Administration from January 01, 2009 to September 30, 2013, excluding patients with chronic dialysis, end-stage renal disease, renal transplant, and missing pre- and postprocedural creatinine measurement. We used 4 AKI definitions in model development and included risk factors from up to 1 year prior to the procedure and at presentation. We developed our prediction models for postprocedural AKI using the least absolute shrinkage and selection operator (LASSO) and internally validated using bootstrapping. We developed models using 115 633 angiogram procedures and externally validated using 27 905 procedures from a New England cohort. Models had cross-validated C-statistics of 0.74 (95% CI: 0.74-0.75) for AKI, 0.83 (95% CI: 0.82-0.84) for AKIN2, 0.74 (95% CI: 0.74-0.75) for contrast-induced nephropathy, and 0.89 (95% CI: 0.87-0.90) for dialysis. We developed a robust, externally validated clinical prediction model for AKI following cardiac catheterization or percutaneous coronary intervention to automatically identify high-risk patients before and immediately after a procedure in the Veterans Health Administration. Work is ongoing to incorporate these models into routine clinical practice. © 2015 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
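A minimal sketch of the core modelling step described above, assuming synthetic predictors and a binary AKI-type outcome: an L1-penalized (LASSO-type) logistic model is fit and its C-statistic is bootstrapped for internal validation. This is not the authors' Veterans Health Administration pipeline, and the resubstitution-style bootstrap below omits the optimism correction a full internal validation would use.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 30))                 # pre-procedure risk factors (synthetic)
    y = (rng.random(2000) < 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 1])))).astype(int)

    # LASSO-penalized logistic regression (fixed penalty strength for brevity)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

    # Bootstrap the C-statistic (AUC) for internal validation
    aucs = []
    for _ in range(200):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue
        aucs.append(roc_auc_score(y[idx], model.decision_function(X[idx])))
    print("bootstrap C-statistic: %.2f (2.5-97.5%%: %.2f-%.2f)"
          % (np.mean(aucs), np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)))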
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
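The combination of jackknife replicates and bootstrap resampling for confidence bounds on a regional total can be illustrated as below. A k-nearest-neighbour regressor stands in for the local spatial prediction model, and all data are synthetic; this is a simplified stand-in for the authors' procedure, not a reproduction of it.

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(2)
    xy_train = rng.random((150, 2))                     # drilled locations (synthetic)
    vol_train = 10 + 5 * xy_train[:, 0] + rng.standard_normal(150)
    xy_target = rng.random((60, 2))                     # undrilled prediction sites

    def total_volume(train_idx):
        # Local predictor fit on a subset of wells; sum predictions over target sites
        model = KNeighborsRegressor(n_neighbors=5).fit(xy_train[train_idx], vol_train[train_idx])
        return model.predict(xy_target).sum()

    # Jackknife replicates: leave one training well out at a time
    n = len(vol_train)
    jack = np.array([total_volume(np.delete(np.arange(n), i)) for i in range(n)])

    # Bootstrap the jackknife replicates for confidence bounds on the total volume
    boot = [np.mean(jack[rng.integers(0, n, n)]) for _ in range(1000)]
    print("total volume estimate: %.1f, 90%% bounds: %.1f-%.1f"
          % (total_volume(np.arange(n)), np.percentile(boot, 5), np.percentile(boot, 95)))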
lazar: a modular predictive toxicology framework
Maunz, Andreas; Gütlein, Martin; Rautenberg, Micha; Vorgrimmler, David; Gebele, Denis; Helma, Christoph
2013-01-01
lazar (lazy structure–activity relationships) is a modular framework for predictive toxicology. Similar to the read across procedure in toxicological risk assessment, lazar creates local QSAR (quantitative structure–activity relationship) models for each compound to be predicted. Model developers can choose between a large variety of algorithms for descriptor calculation and selection, chemical similarity indices, and model building. This paper presents a high level description of the lazar framework and discusses the performance of example classification and regression models. PMID:23761761
29 CFR 1606.6 - Selection procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 4 2012-07-01 2012-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...
29 CFR 1606.6 - Selection procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 4 2011-07-01 2011-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...
29 CFR 1606.6 - Selection procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 4 2014-07-01 2014-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...
29 CFR 1606.6 - Selection procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 4 2010-07-01 2010-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...
29 CFR 1606.6 - Selection procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 4 2013-07-01 2013-07-01 false Selection procedures. 1606.6 Section 1606.6 Labor Regulations Relating to Labor (Continued) EQUAL EMPLOYMENT OPPORTUNITY COMMISSION GUIDELINES ON DISCRIMINATION... the use of the following selection procedures may be discriminatory on the basis of national origin...
Code of Federal Regulations, 2012 CFR
2012-07-01
... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...
Code of Federal Regulations, 2014 CFR
2014-07-01
... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...
Code of Federal Regulations, 2011 CFR
2011-07-01
... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...
Code of Federal Regulations, 2010 CFR
2010-07-01
... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...
Code of Federal Regulations, 2013 CFR
2013-07-01
... SELECTION PROCEDURES (1978) Appendix § 1607.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures... employment practices on grounds of race, color, religion, sex, or national origin. These guidelines have been...
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
Kepler AutoRegressive Planet Search: Motivation & Methodology
NASA Astrophysics Data System (ADS)
Caceres, Gabriel; Feigelson, Eric; Jogesh Babu, G.; Bahamonde, Natalia; Bertin, Karine; Christen, Alejandra; Curé, Michel; Meza, Cristian
2015-08-01
The Kepler AutoRegressive Planet Search (KARPS) project uses statistical methodology associated with autoregressive (AR) processes to model Kepler lightcurves in order to improve exoplanet transit detection in systems with high stellar variability. We also introduce a planet-search algorithm to detect transits in time-series residuals after application of the AR models. One of the main obstacles in detecting faint planetary transits is the intrinsic stellar variability of the host star. The variability displayed by many stars may have autoregressive properties, wherein later flux values are correlated with previous ones in some manner. Auto-Regressive Moving-Average (ARMA) models, Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH), and related models are flexible, phenomenological methods used with great success to model stochastic temporal behaviors in many fields of study, particularly econometrics. Powerful statistical methods are implemented in the public statistical software environment R and its many packages. Modeling involves maximum likelihood fitting, model selection, and residual analysis. These techniques provide a useful framework to model stellar variability and are used in KARPS with the objective of reducing stellar noise to enhance opportunities to find as-yet-undiscovered planets. Our analysis procedure consists of three steps: pre-processing of the data to remove discontinuities, gaps and outliers; ARMA-type model selection and fitting; and transit signal search of the residuals using a new Transit Comb Filter (TCF) that replaces traditional box-finding algorithms. We apply the procedures to simulated Kepler-like time series with known stellar and planetary signals to evaluate the effectiveness of the KARPS procedures. The ARMA-type modeling is effective at reducing stellar noise, but also reduces and transforms the transit signal into ingress/egress spikes. A periodogram based on the TCF is constructed to concentrate the signal of these periodic spikes. When a periodic transit is found, the model is displayed on a standard period-folded averaged light curve. We also illustrate the efficient coding in R.
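The ARMA-type model selection and residual step can be sketched as follows with the Python statsmodels package on a simulated light curve; orders are chosen by AIC as one common option for the maximum-likelihood model selection mentioned above. The Transit Comb Filter search itself is not reproduced, and the simulated flux series is purely illustrative.

    # Illustrative ARMA(p,q) order selection by AIC on a simulated light curve.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import arma_generate_sample

    np.random.seed(3)
    flux = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=2000, scale=0.001)

    best = None
    for p in range(0, 4):
        for q in range(0, 4):
            try:
                fit = ARIMA(flux, order=(p, 0, q)).fit()
            except Exception:
                continue
            if best is None or fit.aic < best[0]:
                best = (fit.aic, p, q, fit)

    aic, p, q, fit = best
    residuals = fit.resid          # residual series that would be passed to the transit search
    print("selected ARMA(%d,%d), AIC = %.1f" % (p, q, aic))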
Major System Source Evaluation and Selection Procedures.
1987-04-02
A-RIBI I" MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES / (U) BUSINESS MANAGEMENT RESEARCH ASSOCIATES INC ARLINGTON VA 02 APR 6? ORMC-5...BRMC-85-5142-1 0 I- MAJOR SYSTEM SOURCE EVALUATION AND SELECTION PROCEDURES o I Business Management Research Associates, Inc. 1911 Jefferson Davis...FORCE SOURCE EVALUATION AND SELECTI ON PROCEDURES Prepared by Business Management Research Associates, Inc., 1911 Jefferson Davis Highway, Arlington
Code of Federal Regulations, 2014 CFR
2014-07-01
...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...
Code of Federal Regulations, 2013 CFR
2013-07-01
...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...
Code of Federal Regulations, 2010 CFR
2010-07-01
...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...
Code of Federal Regulations, 2012 CFR
2012-07-01
...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...
Code of Federal Regulations, 2011 CFR
2011-07-01
...-UNIFORM GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) Appendix to Part 60-3 § 60-3.18 Citations. The official title of these guidelines is “Uniform Guidelines on Employee Selection Procedures (1978)”. The Uniform Guidelines on Employee Selection Procedures (1978) are intended to establish a uniform Federal...
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.
Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R
2018-05-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
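As a toy illustration of fitting a selection coefficient to a frequency trajectory under a purely deterministic model (not the authors' delay-deterministic correction), one can least-squares fit the standard haploid selection recursion; all data below are simulated and the noise model is invented for the example.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def trajectory(p0, s, generations):
        # Deterministic haploid selection: p' = p(1+s) / (1 + s p)
        p = [p0]
        for _ in range(generations):
            p.append(p[-1] * (1 + s) / (1 + s * p[-1]))
        return np.array(p)

    rng = np.random.default_rng(4)
    true = trajectory(0.05, 0.1, 50)
    observed = np.clip(true + 0.02 * rng.standard_normal(true.size), 0, 1)  # noisy samples

    loss = lambda s: np.sum((trajectory(observed[0], s, 50) - observed) ** 2)
    s_hat = minimize_scalar(loss, bounds=(-0.5, 0.5), method="bounded").x
    print("inferred selection coefficient: %.3f (true 0.100)" % s_hat)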
Response moderation models for conditional dependence between response time and response accuracy.
Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan
2017-05-01
It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
Gain selection method and model for coupled propulsion and airframe systems
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1982-01-01
A longitudinal model is formulated for an advanced fighter from three subsystem models: the inlet, the engine, and the airframe. Notable interaction is found in the coupled system. A procedure, based on eigenvalue sensitivities, is presented which indicates the importance of the feedback gains to the optimal solution. This allows ineffectual gains to be eliminated; thus, hardware and expense may be saved in the realization of the physical controller.
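The eigenvalue-sensitivity idea, ranking feedback gains by how much the closed-loop eigenvalues move when each gain is perturbed, can be sketched numerically as below. The state-space matrices are random stand-ins, not the inlet/engine/airframe model from the report.

    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 4, 2
    A = rng.standard_normal((n, n))          # toy plant
    B = rng.standard_normal((n, m))
    K = rng.standard_normal((m, n))          # nominal feedback gain matrix

    def closed_loop_eigs(K):
        return np.sort_complex(np.linalg.eigvals(A - B @ K))

    base = closed_loop_eigs(K)
    eps = 1e-6
    sensitivity = np.zeros_like(K)
    for i in range(m):
        for j in range(n):
            Kp = K.copy()
            Kp[i, j] += eps
            # Largest eigenvalue shift per unit change of this gain element
            sensitivity[i, j] = np.abs(closed_loop_eigs(Kp) - base).max() / eps

    # Gains with near-zero sensitivity are candidates for elimination
    print(np.round(sensitivity, 3))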
Methods for apportioning sources of ambient particulate matter (PM) using the positive matrix factorization (PMF) algorithm are reviewed. Numerous procedural decisions must be made and algorithmic parameters selected when analyzing PM data with PMF. However, few publications docu...
A two-phased fuzzy decision making procedure for IT supplier selection
NASA Astrophysics Data System (ADS)
Shohaimay, Fairuz; Ramli, Nazirah; Mohamed, Siti Rosiah; Mohd, Ainun Hafizah
2013-09-01
In many studies on fuzzy decision making, linguistic terms are usually represented by fixed triangular or trapezoidal fuzzy numbers. However, the fixed fuzzy numbers used in the decision making process may not reflect the actual respondents' opinions. Hence, a two-phased fuzzy decision making procedure is proposed. First, triangular fuzzy numbers were built based on respondents' opinions on the appropriate range (0-100) for each of the seven-scale linguistic terms. Then, the fuzzy numbers were integrated into the fuzzy decision making model. The applicability of the proposed method is demonstrated in a case study of supplier selection in an Information Technology (IT) department. The results produced via the developed fuzzy numbers were consistent with the results obtained using fixed fuzzy numbers. However, with a different, respondent-based set of fuzzy numbers, there is a difference in the ranking of suppliers based on criterion X1 (background of supplier). The proposed model, which incorporates respondent-based fuzzy numbers, is expected to provide more meaningful support for future decision making.
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
NASA Astrophysics Data System (ADS)
Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram
2018-05-01
The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of interest in this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. The ones indicate the presence of a pair of conjugate lines as a GCL and the zeros specify their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parametrization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of combining the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, improving reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel accuracy in registration applications.
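A much simplified sketch of the OWDIS idea follows: binary chromosomes switch individual control observations on or off, and the RMSE on a handful of check points is the fitness. For brevity it uses ground control points and an affine transform rather than conjugate lines and an RFM, so it illustrates only the selection mechanism, not the paper's sensor model; all data and GA settings are invented.

    import numpy as np

    rng = np.random.default_rng(6)
    n_gcp, n_cp = 40, 10
    src = rng.random((n_gcp + n_cp, 2)) * 1000
    true_A, true_t = np.array([[1.01, 0.02], [-0.015, 0.99]]), np.array([5.0, -3.0])
    dst = src @ true_A.T + true_t + rng.normal(0, 0.3, src.shape)   # noisy matches
    gcp, cp = slice(0, n_gcp), slice(n_gcp, None)

    def rmse(mask):
        # Fit an affine transform on the selected control points, score on check points
        if mask.sum() < 3:
            return 1e9
        X = np.hstack([src[gcp][mask], np.ones((mask.sum(), 1))])
        coef, *_ = np.linalg.lstsq(X, dst[gcp][mask], rcond=None)
        pred = np.hstack([src[cp], np.ones((n_cp, 1))]) @ coef
        return np.sqrt(np.mean((pred - dst[cp]) ** 2))

    pop = rng.random((30, n_gcp)) < 0.5                  # binary chromosomes
    for _ in range(60):                                  # generations
        fit = np.array([rmse(c) for c in pop])
        parents = pop[np.argsort(fit)[:10]]              # truncation selection
        children = parents[rng.integers(0, 10, (30, n_gcp)), np.arange(n_gcp)]  # gene-wise crossover
        pop = children ^ (rng.random((30, n_gcp)) < 0.02)                        # bit-flip mutation
    best = pop[np.argmin([rmse(c) for c in pop])]
    print("selected %d of %d control points, check-point RMSE = %.2f"
          % (best.sum(), n_gcp, rmse(best)))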
Inter-rater reliability of select physical examination procedures in patients with neck pain.
Hanney, William J; George, Steven Z; Kolber, Morey J; Young, Ian; Salamh, Paul A; Cleland, Joshua A
2014-07-01
This study evaluated the inter-rater reliability of select examination procedures in patients with neck pain (NP) conducted over a 24- to 48-h period. Twenty-two patients with mechanical NP participated in a standardized examination. One examiner performed standardized examination procedures and a second blinded examiner repeated the procedures 24-48 h later with no treatment administered between examinations. Inter-rater reliability was calculated with the Cohen Kappa and weighted Kappa for ordinal data while continuous level data were calculated using an intraclass correlation coefficient model 2,1 (ICC2,1). Coefficients for categorical variables ranged from poor to moderate agreement (-0.22 to 0.70 Kappa) and coefficients for continuous data ranged from slight to moderate (ICC2,1 0.28-0.74). The standard error of measurement for cervical range of motion ranged from 5.3° to 9.9° while the minimal detectable change ranged from 12.5° to 23.1°. This study is the first to report inter-rater reliability values for select components of the cervical examination in those patients with NP performed 24-48 h after the initial examination. There was considerably less reliability when compared to previous studies, thus clinicians should consider how the passage of time may influence variability in examination findings over a 24- to 48-h period.
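The two agreement statistics named above can be computed as in the sketch below, on made-up ratings from two examiners; Cohen's kappa comes from scikit-learn and ICC(2,1) follows the Shrout and Fleiss two-way ANOVA formula. The numbers are illustrative, not the study's data.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Categorical finding (e.g. positive/negative test) from examiner A and B
    a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
    b = np.array([1, 0, 0, 1, 0, 1, 0, 1, 1, 1])
    print("Cohen's kappa: %.2f" % cohen_kappa_score(a, b))

    def icc_2_1(ratings):
        """ratings: subjects x raters matrix of continuous scores (e.g. ROM in degrees)."""
        n, k = ratings.shape
        grand = ratings.mean()
        ms_r = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
        ms_c = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
        sse = np.sum((ratings - ratings.mean(axis=1, keepdims=True)
                      - ratings.mean(axis=0, keepdims=True) + grand) ** 2)
        ms_e = sse / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    rom = np.array([[45, 50], [60, 55], [38, 42], [70, 68], [52, 49]], dtype=float)
    print("ICC(2,1): %.2f" % icc_2_1(rom))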
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l1 penalty term on the parameter vector of the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
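The LASSO-based structure detection step can be illustrated as follows on a synthetic pseudo-linear regression: a library of candidate terms is assembled, and terms whose coefficients are shrunk exactly to zero are dropped from the structure. The candidate terms and system are invented for the example and have nothing to do with the F/A-18 data.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(7)
    u = rng.standard_normal(500)                              # input signal
    y = np.zeros(500)
    for t in range(2, 500):                                   # "true" system (synthetic)
        y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * u[t - 1] + 0.1 * u[t - 1] ** 2

    # Candidate terms: lags of y and u plus a few nonlinear regressors
    names = ["y[t-1]", "y[t-2]", "u[t-1]", "u[t-2]", "u[t-1]^2", "y[t-1]*u[t-1]"]
    X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2], u[1:-1] ** 2, y[1:-1] * u[1:-1]])
    target = y[2:] + 0.01 * rng.standard_normal(498)

    lasso = LassoCV(cv=5).fit(X, target)
    selected = [n for n, c in zip(names, lasso.coef_) if abs(c) > 1e-6]
    print("selected structure:", selected)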
NASA Astrophysics Data System (ADS)
Kamaruddin, Ainur Amira; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Ahmad, Wan Muhamad Amir W.
2014-07-01
Logistic regression analysis examines the influence of various factors on a dichotomous outcome by estimating the probability of the event's occurrence. Logistic regression, also called a logit model, is a statistical procedure used to model dichotomous outcomes. In the logit model, the log odds of the dichotomous outcome is modeled as a linear combination of the predictor variables. The log odds ratio in logistic regression provides a description of the probabilistic relationship between the variables and the outcome. In conducting logistic regression, selection procedures are used to select important predictor variables; diagnostics are used to check that assumptions are valid (independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers); and a test statistic is calculated to determine the aptness of the model. This study used the binary logistic regression model to investigate overweight and obesity among rural secondary school students on the basis of their demographic profile, medical history, diet and lifestyle. The results indicate that overweight and obesity of students are influenced by obesity in the family and by the interaction between a student's ethnicity and routine meal intake. The odds of a student being overweight or obese are higher for a student with a family history of obesity and for a non-Malay student who frequently takes routine meals, as compared to a Malay student.
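An illustrative logit fit with an interaction term of the kind described above is sketched below using statsmodels on a synthetic data frame; the variable names (obese, family_obesity, ethnicity, routine_meals) are placeholders, not the study's actual coding.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    n = 400
    df = pd.DataFrame({
        "family_obesity": rng.integers(0, 2, n),
        "ethnicity": rng.choice(["Malay", "non-Malay"], n),
        "routine_meals": rng.integers(0, 2, n),
    })
    # Simulated outcome with a main effect and an ethnicity-by-meals interaction
    lp = -1.5 + 1.2 * df.family_obesity + 0.9 * df.routine_meals * (df.ethnicity == "non-Malay")
    df["obese"] = (rng.random(n) < 1 / (1 + np.exp(-lp))).astype(int)

    model = smf.logit("obese ~ family_obesity + C(ethnicity) * routine_meals", data=df).fit(disp=0)
    print(np.exp(model.params))        # odds ratios for main effects and the interaction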
Surface water risk assessment of pesticides in Ethiopia.
Teklu, Berhan M; Adriaanse, Paulien I; Ter Horst, Mechteld M S; Deneer, John W; Van den Brink, Paul J
2015-03-01
Scenarios for future use in the pesticide registration procedure in Ethiopia were designed for three separate Ethiopian locations, and are intended to be protective for the whole of Ethiopia. The scenarios estimate concentrations in surface water resulting from agricultural use of pesticides for a small stream and for two types of small ponds. Seven pesticides were selected because they were estimated to pose the highest risk to humans on the basis of volume of use, application rate, and acute and chronic human toxicity, assuming exposure as a result of the consumption of surface water. Potential ecotoxicological risks were not considered as a selection criterion at this stage. Estimates of exposure concentrations in surface water were established using modelling software also applied in the EU registration procedure (PRZM and TOXSWA). Input variables included physico-chemical properties and data such as crop calendars, irrigation schedules, meteorological information and detailed application data, which were specifically tailored to the Ethiopian situation. The results indicate that for all the pesticides investigated the acute human risk resulting from the consumption of surface water is low to negligible, whereas agricultural use of chlorothalonil, deltamethrin, endosulfan and malathion in some crops may result in medium to high risk to aquatic species. The predicted environmental concentration estimates are based on procedures similar to those used at the EU level and in the USA. Addition of aquatic macrophytes as an ecotoxicological endpoint may constitute a welcome future addition to the risk assessment procedure. Implementation of the methods used for risk characterization constitutes a good step forward in the pesticide registration procedure in Ethiopia. Copyright © 2014 Elsevier B.V. All rights reserved.
[Selection of medical students : Measurement of cognitive abilities and psychosocial competencies].
Schwibbe, Anja; Lackamp, Janina; Knorr, Mirjana; Hissbach, Johanna; Kadmon, Martina; Hampe, Wolfgang
2018-02-01
The German Constitutional Court is currently reviewing whether the current admission process for medical studies is compatible with the constitutional right of freedom of profession, since applicants without an excellent GPA usually have to wait for seven years. If the admission system is changed, politicians would like to increase the influence of psychosocial criteria on selection, as specified by the Masterplan Medizinstudium 2020. What experience has been gained with the current selection procedures? How could Situational Judgement Tests contribute to the validity of future selection procedures at German medical schools? High school GPA is the best predictor of study performance, but is increasingly under discussion due to the lack of comparability between states and schools and the growing number of applicants with top grades. Aptitude and knowledge tests, especially in the natural sciences, show incremental validity in predicting study performance. The measurement of psychosocial competencies with traditional interviews shows rather low reliability and validity; the more reliable multiple mini-interviews are superior in predicting practical study performance. Situational judgement tests (SJTs) used abroad are regarded as reliable and valid; the correlation of a German SJT piloted in Hamburg with the multiple mini-interview is cautiously encouraging. A model proposed by the Medizinischer Fakultätentag and the Bundesvertretung der Medizinstudierenden considers these results. Student selection is proposed to be based on a combination of high school GPA (40%) and a cognitive test (40%), as well as an SJT (10%) and job experience (10%). Furthermore, the faculties still have the option to carry out their own specific selection procedures.
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks, and its effectiveness is evaluated by providing 1-hour-ahead wind speed forecasts at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
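A compact sketch of the two-layer idea follows: first-layer learners are fit on features chosen by a simple univariate selection step (a stand-in for the deep feature selection framework), and a second-layer model blends their predictions. It uses scikit-learn stacking on synthetic regression data and does not reproduce the authors' forecasting setup.

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    X, y = make_regression(n_samples=800, n_features=40, n_informative=8,
                           noise=5.0, random_state=9)

    # First layer: diverse learners, each preceded by a feature-selection step
    first_layer = [
        ("rf", make_pipeline(SelectKBest(f_regression, k=10),
                             RandomForestRegressor(n_estimators=100, random_state=0))),
        ("mlp", make_pipeline(SelectKBest(f_regression, k=10),
                              MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                           random_state=0))),
    ]
    # Second layer: a blender trained on the first-layer forecasts
    blender = StackingRegressor(estimators=first_layer, final_estimator=Ridge())
    print("blended R^2: %.2f" % cross_val_score(blender, X, y, cv=5).mean())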
Cervantes, Jose A; Costello, Collin M; Maarouf, Melody; McCrary, Hilary C; Zeitouni, Nathalie C
2017-09-01
A realistic model for the instruction of basic dermatologic procedural skills was developed, while simultaneously increasing medical student exposure to the field of dermatology. The primary purpose of the authors' study was to evaluate the utilization of a fresh-tissue cadaver model (FTCM) as a method for the instruction of common dermatologic procedures. The authors' secondary aim was to assess students' perceived clinical skills and overall perception of the field of dermatology after the lab. Nineteen first- and second-year medical students were pre- and post-tested on their ability to perform punch and excisional biopsies on a fresh-tissue cadaver. Students were then surveyed on their experience. Assessment of the cognitive knowledge gain and technical skills revealed a statistically significant improvement in all categories (p < .001). An analysis of the survey demonstrated that 78.9% were more interested in selecting dermatology as a career and 63.2% of participants were more likely to refer their future patients to a Mohs surgeon. An FTCM is a viable method for the instruction and training of dermatologic procedures. In addition, the authors conclude that an FTCM provides realistic instruction for common dermatologic procedures and enhances medical students' early exposure and interest in the field of dermatology.
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
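The feature-subset ensemble with majority voting can be sketched as below. The optimum-path forest classifier used in the paper is not available in scikit-learn, so k-nearest-neighbour classifiers stand in for it, and the three synthetic feature groups are placeholders for the shape, colour and texture descriptors.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=600, n_features=30, n_informative=10, random_state=10)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # Each base learner sees only one feature group, creating ensemble diversity
    groups = {"shape": slice(0, 10), "colour": slice(10, 20), "texture": slice(20, 30)}
    votes = []
    for name, cols in groups.items():
        clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr[:, cols], ytr)
        votes.append(clf.predict(Xte[:, cols]))

    # Majority vote across the subset-specific classifiers (binary labels)
    majority = (np.mean(np.vstack(votes), axis=0) > 0.5).astype(int)
    print("ensemble accuracy: %.2f" % (majority == yte).mean())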
ERIC Educational Resources Information Center
Saavedra, Pedro; And Others
Parameters and procedures for developing an error-prone model (EPM) to predict financial aid applicants who are likely to misreport on Basic Educational Opportunity Grant (BEOG) applications are introduced. Specifications to adapt these general parameters to secondary data analysis of the Validation, Edits, and Applications Processing Systems…
Saunders, A B; Keefe, L; Birch, S A; Wierzbicki, M A; Maitland, D J
2017-06-01
The purpose of this study was to evaluate a canine patent ductus arteriosus (PDA) model developed for practicing device placement and to determine practices and perceptions regarding transcatheter closure of PDA from the veterinary cardiology community. A silicone model was developed from images obtained from a dog with a PDA and device placement was performed with catheter equipment and a document camera to simulate fluoroscopy. A total of 36 individuals including 24 diplomates and 12 residents participated, and the feedback was obtained. The study included an initial questionnaire, practice with the model, observation of device placement using the model, and a follow-up questionnaire. A total of 92% of participants including 100% of residents indicated they did not have the opportunity to practice device placement before performing the procedure and obtained knowledge of the procedure from reading journal articles or observation. Participants indicated selecting the appropriate device size (30/36, 83%) and ensuring the device is appropriately positioned before release (18/36, 50%) as the most common areas of difficulty with device placement. Confidence level was higher after practicing with the model for residents when compared with diplomates and for participants that had performed 1-15 procedures when compared with those that had performed >15 procedures. These findings suggest those that have performed fewer procedures may benefit the most from practicing with a model. This preliminary study demonstrates the feasibility of a PDA model for practicing device placement and suggests that there is a potential benefit from providing additional training resources. Copyright © 2017 Elsevier B.V. All rights reserved.
48 CFR 6.102 - Use of competitive procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... procedure (see subpart 36.6 for procedures). (2) Competitive selection of basic and applied research and... nature identifying areas of research interest, including criteria for selecting proposals, and soliciting...
Nonlinear probabilistic finite element models of laminated composite shells
NASA Technical Reports Server (NTRS)
Engelstad, S. P.; Reddy, J. N.
1993-01-01
A probabilistic finite element analysis procedure for laminated composite shells has been developed. A total Lagrangian finite element formulation, employing a degenerated 3-D laminated composite shell with the full Green-Lagrange strains and first-order shear deformable kinematics, forms the modeling foundation. The first-order second-moment technique for probabilistic finite element analysis of random fields is employed and results are presented in the form of mean and variance of the structural response. The effects of material nonlinearity are included through the use of a rate-independent anisotropic plasticity formulation from the macroscopic point of view. Both ply-level and micromechanics-level random variables can be selected, the latter by means of the Aboudi micromechanics model. A number of sample problems are solved to verify the accuracy of the procedures developed and to quantify the variability of certain material type/structure combinations. Experimental data is compared in many cases, and the Monte Carlo simulation method is used to check the probabilistic results. In general, the procedure is quite effective in modeling the mean and variance response of the linear and nonlinear behavior of laminated composite shells.
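The first-order second-moment step, propagating the mean and covariance of random inputs through a response function via a gradient at the mean point, can be illustrated with a scalar toy response; the finite element shell response is replaced here by a simple closed-form function, so the sketch shows the technique, not the paper's model.

    import numpy as np

    def response(x):
        # Toy stiffness-type response standing in for the shell finite element model
        E, t, P = x                    # modulus, thickness, load (illustrative inputs)
        return P / (E * t ** 3)

    mean = np.array([70e9, 0.01, 1000.0])
    cov = np.diag([(0.05 * 70e9) ** 2, (0.02 * 0.01) ** 2, (0.10 * 1000.0) ** 2])

    # Finite-difference gradient of the response at the mean point
    grad = np.zeros(3)
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = 1e-6 * mean[i]
        grad[i] = (response(mean + dx) - response(mean - dx)) / (2 * dx[i])

    mu_r = response(mean)              # first-order estimate of the mean response
    var_r = grad @ cov @ grad          # first-order second-moment variance
    print("mean response: %.3e, std: %.3e" % (mu_r, np.sqrt(var_r)))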
McCarthy, Ann Marie; Kleiber, Charmaine; Ataman, Kaan; Street, W. Nick; Zimmerman, M. Bridget; Ersig, Anne L.
2012-01-01
This secondary data analysis used data mining methods to develop predictive models of child risk for distress during a healthcare procedure. Data used came from a study that predicted factors associated with children’s responses to an intravenous catheter insertion while parents provided distraction coaching. From the 255 items used in the primary study, 44 predictive items were identified through automatic feature selection and used to build support vector machine regression models. Models were validated using multiple cross-validation tests and by comparing variables identified as explanatory in the traditional versus support vector machine regression. Rule-based approaches were applied to the model outputs to identify overall risk for distress. A decision tree was then applied to evidence-based instructions for tailoring distraction to characteristics and preferences of the parent and child. The resulting decision support computer application, the Children, Parents and Distraction (CPaD), is being used in research. Future use will support practitioners in deciding the level and type of distraction intervention needed by a child undergoing a healthcare procedure. PMID:22805121
ERIC Educational Resources Information Center
Harrison, Gary, Ed.; Mirkes, Donna Z., Ed.
Intended for educators who direct federally funded model projects, the booklet provides a framework for special education product development. In "Making Media Decisions," G. Richman explores procedures for selecting the most appropriate medium to carry the message of a given product. The fundamental questions are addressed: what is the goal; who…
48 CFR 36.301 - Use of two-phase design-build selection procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Use of two-phase design-build selection procedures. 36.301 Section 36.301 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL CATEGORIES OF CONTRACTING CONSTRUCTION AND ARCHITECT-ENGINEER CONTRACTS Two-Phase Design-Build Selection Procedures 36.301...
9 CFR 592.450 - Procedures for selecting appeal samples.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 2 2013-01-01 2013-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...
9 CFR 592.450 - Procedures for selecting appeal samples.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 2 2011-01-01 2011-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...
9 CFR 592.450 - Procedures for selecting appeal samples.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 2 2014-01-01 2014-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...
9 CFR 592.450 - Procedures for selecting appeal samples.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 2 2012-01-01 2012-01-01 false Procedures for selecting appeal samples. 592.450 Section 592.450 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE EGG PRODUCTS INSPECTION VOLUNTARY INSPECTION OF EGG PRODUCTS Appeals § 592.450 Procedures for selecting appeal samples. (a)...
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
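A toy version of the selection criterion is sketched below: for each candidate embedding dimension p, a predictor is fit on a training segment and scored by its negative log-predictive likelihood on held-out data, and the dimension with the smallest score wins. A Gaussian AR(p) predictor is used as a simple parametric stand-in for the paper's nonparametric estimator, and the time series is simulated.

    import numpy as np

    rng = np.random.default_rng(11)
    x = np.zeros(3000)
    for t in range(2, 3000):                   # an AR(2) stochastic system
        x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

    def nlpl(p, split=2000):
        # Delay embedding of dimension p: predict x[t] from x[t-1], ..., x[t-p]
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        y = x[p:]
        Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]
        coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
        sigma2 = (ytr - Xtr @ coef).var()      # training residual variance
        err = yte - Xte @ coef
        # Average negative log of the Gaussian predictive density on held-out points
        return 0.5 * np.mean(np.log(2 * np.pi * sigma2) + err ** 2 / sigma2)

    scores = {p: nlpl(p) for p in range(1, 6)}
    print("selected embedding dimension:", min(scores, key=scores.get), scores)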
Zhou, Hongyi; Skolnick, Jeffrey
2010-01-01
In this work, we develop a method called FTCOM for assessing the global quality of protein structural models for targets of medium and hard difficulty (remote homology) produced by structure prediction approaches such as threading or ab initio structure prediction. FTCOM requires the Cα coordinates of full-length models and assesses model quality based on fragment comparison and a score derived from comparison of the model to top threading templates. On a set of 361 medium/hard targets, FTCOM was applied to the SP3, SPARKS, PROSPECTOR_3, and PRO-SP3-TASSER threading algorithms and assessed for its ability to improve upon their results. The average TM-score improves by 5%–10% for the first model selected by the new method over models obtained by the original selection procedure in the respective threading methods. Moreover, the number of foldable targets (TM-score ≥0.4) increases by at least 7.6% for SP3 and up to 54% for SPARKS. Thus, FTCOM is a promising approach to template selection. PMID:20455261
Constructing exact perturbations of the standard cosmological models
NASA Astrophysics Data System (ADS)
Sopuerta, Carlos F.
1999-11-01
In this paper we show a procedure to construct cosmological models which, according to a covariant criterion, can be seen as exact (nonlinear) perturbations of the standard Friedmann-Lemaître-Robertson-Walker (FLRW) cosmological models. The special properties of this procedure will allow us to select some of the characteristics of the models and also to study in depth their main geometrical and physical features. In particular, the models are conformally stationary, which means that they are compatible with the existence of isotropic radiation, and the observers that would measure this isotropy are rotating. Moreover, these models have two arbitrary functions (one of them is a complex function) which control their main properties, and in general they do not have any isometry. We study two examples, focusing on the case when the underlying FLRW models are flat dust models. In these examples we compare our results with those of the linearized theory of perturbations about a FLRW background.
A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.
Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe
2014-01-01
Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification trees) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model has been recently published. It combines regression models and tree-based methodology. It is however computationally burdensome and not well suited for situations for which a large number of exploratory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios has been performed to compare the performances of the proposed procedure to the original PLTR strategy. The proposed procedure with a Bayesian Information Criterion (BIC) achieved good performances to detect the hidden structure as compared to the original procedure. The novel procedure was used for analyzing patterns of copy-number alterations in lung adenocarcinomas, with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. Results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori
NASA Astrophysics Data System (ADS)
Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.
2017-02-01
Over the last decades, 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method based on the concept of algorithmic modelling. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture grammars (CGA grammars), which define the objects' detailed geometry rather than requiring the model to be altered or edited manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models were derived from the application of shape grammars on selected footprints, and the process resulted in a final 3D model that optimally describes the built environment of Central Zagori in three Levels of Detail (LoD). The final 3D scene was exported and published as a 3D web scene which can be viewed with the 3D CityEngine viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks that derive from procedural modelling techniques in the field of cultural heritage, and more specifically the 3D modelling of traditional settlements.
MIRACAL: A mission radiation calculation program for analysis of lunar and interplanetary missions
NASA Technical Reports Server (NTRS)
Nealy, John E.; Striepe, Scott A.; Simonsen, Lisa C.
1992-01-01
A computational procedure and data base are developed for manned space exploration missions, for which estimates are made of the energetic particle fluences encountered and the resulting dose equivalent incurred. The data base includes the following options: a statistical or continuum model for ordinary solar proton events, selection of up to six large proton flare spectra, and galactic cosmic ray fluxes for elemental nuclei of charge numbers 1 through 92. The program requires input trajectory definition information and specification of optional parameters, which include the desired spectral data and nominal shield thickness. The procedure may be implemented as an independent program or as a subroutine in trajectory codes. This code should be most useful in mission optimization and selection studies for which radiation exposure is of special importance.
Tang, Dalin; Yang, Chun; Geva, Tal; del Nido, Pedro J.
2010-01-01
Recent advances in medical imaging technology and computational modeling techniques are making it possible to construct patient-specific computational ventricle models and to use them to test surgical hypotheses, replacing empirical and often risky clinical experimentation in examining the efficiency and suitability of various reconstructive procedures in diseased hearts. In this paper, we provide a brief review of recent developments in ventricle modeling and its potential application in surgical planning and management of tetralogy of Fallot (ToF) patients. Aspects of data acquisition, model selection and construction, tissue material properties, ventricle layer structure and tissue fiber orientations, pressure conditions, model validation, and virtual surgery procedures (modifying patient-specific ventricle data and performing computer simulations) were reviewed. Results from a case study using patient-specific cardiac magnetic resonance (CMR) imaging and a right/left ventricle and patch (RV/LV/Patch) combination model with fluid-structure interactions (FSI) were reported. The models were used to evaluate and optimize the human pulmonary valve replacement/insertion (PVR) surgical procedure and patch design, and to test the surgical hypothesis that PVR with a small patch and aggressive scar tissue trimming may lead to improved recovery of RV function and reduced stress/strain conditions in the patch area. PMID:21344066
NASA Technical Reports Server (NTRS)
Gawronski, W.
2004-01-01
Wind gusts are the main disturbances that degrade the tracking precision of microwave antennas and radiotelescopes. Linear-quadratic-Gaussian (LQG) controllers, as compared with proportional-and-integral (PI) controllers, significantly improve tracking precision under wind disturbances. However, their properties have not been satisfactorily understood; consequently, their tuning is a trial-and-error process. A control engineer has two tools to tune an LQG controller: the choice of the coordinate system of the controller model and the selection of the weights of the LQG performance index. This article analyzes the properties of an open- and closed-loop antenna. It shows that the proper choice of coordinates for the open-loop model simplifies the shaping of the closed-loop performance. The closed-loop properties are influenced by the LQG weights. The article shows the impact of the weights on the antenna closed-loop bandwidth, disturbance rejection properties, and antenna acceleration. The bandwidth and the disturbance rejection characterize the antenna performance, while the acceleration represents the performance limit set by the antenna hardware (motors). The article presents a controller tuning procedure based on the coordinate selection and the weight properties. The procedure rationally shapes the closed-loop performance, as an alternative to the trial-and-error approach.
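A minimal sketch of how LQ weights shape closed-loop behaviour, using a generic second-order position/rate model as a stand-in for an antenna drive; the matrices and weights are illustrative assumptions, not the article's antenna model.

```python
# Hedged sketch: effect of LQ weights on a second-order servo model.
# Heavier weighting of the position state widens the closed-loop bandwidth.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, -0.5]])   # position/rate model (illustrative)
B = np.array([[0.0], [1.0]])
for q in (1.0, 100.0):                    # position weight in the LQ index
    Q = np.diag([q, 0.1])
    R = np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)       # state-feedback gain
    poles = np.linalg.eigvals(A - B @ K)
    print(f"q={q:6.1f}  gain={K.ravel()}  closed-loop poles={poles}")
```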
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-05-01
Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables, measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is distinctive in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatological considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as the model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is accounted for by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour-models). The forecast skill of the month/hour-models largely depends on month and time of day, ranging from 0 to 0.8, but the mixed-field predictors generally perform better than the single-field predictors. At all time scales, the ESD model shows added value against two simple reference models: (i) the direct use of reanalysis grid point values, and (ii) mean diurnal and seasonal cycles over the calibration period. The ESD model forecast for 1960 to 2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation, but is sensitive to the chosen predictor type. So far, we have not assessed the performance of NCEP/NCAR reanalysis data against other reanalysis products. The developed ESD model is computationally cheap and applicable wherever measurements are available for model calibration.
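A minimal sketch of the principal-component screening step described above: predictor-field principal components are added one at a time to a regression, and the number of components is chosen by the Akaike Information Criterion. The data are synthetic; the field, target and AIC form are illustrative assumptions.

```python
# Hedged sketch: AIC screening of principal components in an EOF-plus-
# regression setting (synthetic data, not the Artesonraju measurements).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 30
field = rng.normal(size=(n, p))                  # gridded predictor field
target = field[:, :2] @ np.array([1.0, -0.5]) + 0.3 * rng.normal(size=n)

scores = PCA().fit_transform(field)
best = None
for k in range(1, 11):                           # candidate numbers of PCs
    X = scores[:, :k]
    resid = target - LinearRegression().fit(X, target).predict(X)
    sigma2 = np.mean(resid ** 2)
    aic = n * np.log(sigma2) + 2 * (k + 1)       # Gaussian AIC up to a constant
    if best is None or aic < best[0]:
        best = (aic, k)
print("AIC-selected number of principal components:", best[1])
```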
The effect of mis-specification on mean and selection between the Weibull and lognormal models
NASA Astrophysics Data System (ADS)
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model is selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models, respectively. The impact is then evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are illustrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if certain special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present a published data set to illustrate the study in this paper.
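A hedged Monte Carlo sketch of the mis-specification effect discussed above: lognormal samples are fitted both by the correct lognormal model (MLE of the mean) and by a Weibull model (QMLE of the mean), and the resulting biases and MSEs are compared. Sample size, parameters and replication count are illustrative, not the paper's design.

```python
# Hedged sketch: bias and MSE of the mean estimate when lognormal data are
# wrongly fitted by a Weibull model (illustrative Monte Carlo).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 0.5, 50, 500
true_mean = np.exp(mu + sigma ** 2 / 2)

mle_err, qmle_err = [], []
for _ in range(reps):
    x = rng.lognormal(mu, sigma, size=n)
    # MLE of the mean under the correct lognormal model
    m, s = np.mean(np.log(x)), np.std(np.log(x))
    mle_err.append(np.exp(m + s ** 2 / 2) - true_mean)
    # QMLE of the mean under the mis-specified Weibull model
    c, loc, scale = stats.weibull_min.fit(x, floc=0)
    qmle_err.append(stats.weibull_min.mean(c, loc=loc, scale=scale) - true_mean)

print("bias  MLE: %.4f  QMLE: %.4f" % (np.mean(mle_err), np.mean(qmle_err)))
print("MSE   MLE: %.4f  QMLE: %.4f" % (np.mean(np.square(mle_err)),
                                       np.mean(np.square(qmle_err))))
```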
Computer aided stress analysis of long bones utilizing computer tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marom, S.A.
1986-01-01
A computer aided analysis method, utilizing computed tomography (CT), has been developed which, together with a finite element program, determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density, and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh of a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of the pixels within the element borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The output of this program, stresses and displacements at the model elements and nodes, is sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model.
Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.
2011-01-01
Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data, permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on the reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set-up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for the use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participants were compared to non-participants after applying inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost to follow-up. Estimators using four model selection procedures provided estimates of the intervention effect between an odds ratio (OR) of 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss to follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit from the introduction of weighting methods such as IPW. PMID:20375927
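A minimal sketch of the IPW recipe outlined above, assuming synthetic data: a propensity model supplies inverse probability weights, which are then used in a weighted outcome model. Variable names and coefficients are illustrative only.

```python
# Hedged sketch of IPW: weight each subject by the inverse of the estimated
# probability of their observed exposure, then fit a weighted outcome model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
covars = rng.normal(size=(n, 2))                     # baseline covariates
p_exposed = 1 / (1 + np.exp(-(covars @ [1.0, -0.5])))
exposed = rng.binomial(1, p_exposed)
p_outcome = 1 / (1 + np.exp(-(-1.0 + 0.8 * covars[:, 0] - 0.7 * exposed)))
outcome = rng.binomial(1, p_outcome)

# Step 1: propensity model for exposure given covariates
ps = LogisticRegression().fit(covars, exposed).predict_proba(covars)[:, 1]
w = np.where(exposed == 1, 1 / ps, 1 / (1 - ps))     # inverse probability weights

# Step 2: weighted outcome model; the exposure coefficient approximates the
# marginal log-odds ratio under the usual IPW assumptions.
fit = LogisticRegression().fit(exposed.reshape(-1, 1), outcome, sample_weight=w)
print("IPW-estimated odds ratio:", np.exp(fit.coef_[0, 0]))
```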
High Drinking in the Dark Mice: A genetic model of drinking to intoxication
Barkley-Levenson, Amanda M.; Crabbe, John C.
2014-01-01
Drinking to intoxication is a critical component of risky drinking behaviors in humans, such as binge drinking. Previous rodent models of alcohol consumption largely failed to demonstrate that animals were patterning drinking in such a way as to experience intoxication. Therefore, few rodent models of binge-like drinking and no specifically genetic models were available to study possible predisposing genes. The High Drinking in the Dark (HDID) selective breeding project was started to help fill this void, with HDID mice selected for reaching high blood alcohol levels in a limited access procedure. HDID mice now represent a genetic model of drinking to intoxication and can be used to help answer questions regarding predisposition toward this trait as well as potential correlated responses. They should also prove useful for the eventual development of better therapeutic strategies. PMID:24360287
23 CFR 636.202 - When are two-phase design-build selection procedures appropriate?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false When are two-phase design-build selection procedures appropriate? 636.202 Section 636.202 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ENGINEERING AND TRAFFIC OPERATIONS DESIGN-BUILD CONTRACTING Selection Procedures, Award Criteria § 636.202 When are two-phase design-build...
Simultaneous Optimization of Decisions Using a Linear Utility Function.
ERIC Educational Resources Information Center
Vos, Hans J.
1990-01-01
An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)
"Wireless": Some Facts and Figures from a Corpus-Driven Study
ERIC Educational Resources Information Center
Rizzo, Camino Rea
2009-01-01
"Wireless" is the word selected to illustrate a model of analysis designed to determine the specialized character of a lexical unit. "Wireless" belongs to the repertoire of specialized vocabulary automatically extracted from a corpus of telecommunication engineering English (TEC). This paper describes the procedure followed in the analysis which…
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
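A hedged sketch of a fit-criteria summary display: several information criteria are computed across candidate numbers of latent classes and plotted together. A Gaussian mixture is used here as a simple stand-in for a group-based trajectory model; it is not the authors' R code.

```python
# Hedged sketch: plot fit criteria across candidate numbers of latent classes.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal(3, 1, (100, 3))])

ks = range(1, 7)
bic, aic = [], []
for k in ks:
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(data)
    bic.append(gm.bic(data))
    aic.append(gm.aic(data))

plt.plot(ks, bic, marker="o", label="BIC")
plt.plot(ks, aic, marker="s", label="AIC")
plt.xlabel("number of latent classes")
plt.ylabel("criterion value (lower is better)")
plt.legend()
plt.show()
```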
Manoj Kumar, Palanivelu; Karthikeyan, Chandrabose; Hari Narayana Moorthy, Narayana Subbiah; Trivedi, Piyush
2006-11-01
In the present paper, a quantitative structure-activity relationship (QSAR) approach was applied to understand the affinity and selectivity of a novel series of triaryl imidazole derivatives towards the glucagon receptor. Statistically significant and highly predictive QSARs were derived for glucagon receptor inhibition by triaryl imidazoles using QuaSAR descriptors of the molecular operating environment (MOE), employing a computer-assisted multiple regression procedure. The generated QSAR models revealed that factors related to hydrophobicity, molecular shape and geometry predominantly influence the glucagon receptor binding affinity of the triaryl imidazoles, indicating the relevance of shape-specific steric interactions between the molecule and the receptor. Further, the QSAR models formulated for selective inhibition of the glucagon receptor over p38 mitogen-activated protein (MAP) kinase highlight that the same structural features which influence glucagon receptor affinity also contribute to the compounds' selective inhibition.
Chen, Bor-Sen
2016-01-01
Bacteria navigate environments full of various chemicals to seek favorable places for survival by controlling the flagella’s rotation using a complicated signal transduction pathway. By influencing the pathway, bacteria can be engineered to search for specific molecules, which has great potential for application to biomedicine and bioremediation. In this study, genetic circuits were constructed to make bacteria search for a specific molecule at particular concentrations in their environment through a synthetic biology method. In addition, by replacing the “brake component” in the synthetic circuit with some specific sensitivities, the bacteria can be engineered to locate areas containing specific concentrations of the molecule. Measured by the swarm assay qualitatively and microfluidic techniques quantitatively, the characteristics of each “brake component” were identified and represented by a mathematical model. Furthermore, we established another mathematical model to anticipate the characteristics of the “brake component”. Based on this model, an abundant component library can be established to provide adequate component selection for different searching conditions without identifying all components individually. Finally, a systematic design procedure was proposed. Following this systematic procedure, one can design a genetic circuit for bacteria to rapidly search for and locate different concentrations of particular molecules by selecting the most adequate “brake component” in the library. Moreover, following simple procedures, one can also establish an exclusive component library suitable for other cultivated environments, promoter systems, or bacterial strains. PMID:27096615
Manual hierarchical clustering of regional geochemical data using a Bayesian finite mixture model
Ellefsen, Karl J.; Smith, David
2016-01-01
Interpretation of regional scale, multivariate geochemical data is aided by a statistical technique called “clustering.” We investigate a particular clustering procedure by applying it to geochemical data collected in the State of Colorado, United States of America. The clustering procedure partitions the field samples for the entire survey area into two clusters. The field samples in each cluster are partitioned again to create two subclusters, and so on. This manual procedure generates a hierarchy of clusters, and the different levels of the hierarchy show geochemical and geological processes occurring at different spatial scales. Although there are many different clustering methods, we use Bayesian finite mixture modeling with two probability distributions, which yields two clusters. The model parameters are estimated with Hamiltonian Monte Carlo sampling of the posterior probability density function, which usually has multiple modes. Each mode has its own set of model parameters; each set is checked to ensure that it is consistent both with the data and with independent geologic knowledge. The set of model parameters that is most consistent with the independent geologic knowledge is selected for detailed interpretation and partitioning of the field samples.
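A hedged sketch of the recursive two-cluster partitioning described above. A maximum-likelihood Gaussian mixture stands in for the Bayesian finite mixture estimated by Hamiltonian Monte Carlo in the study, and the data are synthetic.

```python
# Hedged sketch: recursive two-cluster partitioning of multivariate samples.
import numpy as np
from sklearn.mixture import GaussianMixture

def split(samples, depth, max_depth=2, min_size=30):
    """Recursively partition samples into two clusters per level."""
    if depth >= max_depth or len(samples) < min_size:
        return {"n": len(samples)}
    labels = GaussianMixture(n_components=2, random_state=0).fit_predict(samples)
    return {"n": len(samples),
            "left": split(samples[labels == 0], depth + 1, max_depth, min_size),
            "right": split(samples[labels == 1], depth + 1, max_depth, min_size)}

rng = np.random.default_rng(5)
geochem = rng.normal(size=(400, 5))      # stand-in for multivariate geochemical data
print(split(geochem, depth=0))
```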
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2016-03-01
Different chemometric models were applied for the quantitative analysis of amoxicillin (AMX) and flucloxacillin (FLX) in their binary mixtures, namely, partial least squares (PLS), spectral residual augmented classical least squares (SRACLS), concentration residual augmented classical least squares (CRACLS) and artificial neural networks (ANNs). All methods were applied with and without a variable selection procedure (genetic algorithm, GA). The methods were used for the quantitative analysis of the drugs in laboratory-prepared mixtures and a real market sample by processing the UV spectral data. Robust and simpler models were obtained by applying GA. The proposed methods were found to be rapid, simple and to require no preliminary separation steps.
NASA Technical Reports Server (NTRS)
Stalmach, C. J., Jr.
1975-01-01
Several model/instrument concepts employing electroless metallic skin were considered for improvement of surface condition, accuracy, and cost of contoured-geometry convective heat transfer models. A plated semi-infinite slab approach was chosen for development and evaluation in a hypersonic wind tunnel. The plated slab model consists of an epoxy casting containing fine constantan wires accurately placed at specified surface locations. An electroless alloy was deposited on the plastic surface that provides a hard, uniformly thick, seamless skin. The chosen alloy forms a high-output thermocouple junction with each exposed constantan wire, providing means of determining heat transfer during tunnel testing of the model. A selective electroless plating procedure was used to deposit scaled heatshield tiles on the lower surface of a 0.0175-scale shuttle orbiter model. Twenty-five percent of the tiles were randomly selected and plated to a height of 0.001-inch. The purpose was to assess the heating effects of surface roughness simulating misalignment of tiles that may occur during manufacture of the spacecraft.
Forner-Cordero, A; Mateu-Arce, M; Forner-Cordero, I; Alcántara, E; Moreno, J C; Pons, J L
2008-04-01
A common problem shared by accelerometers, inertial sensors and any motion measurement method based on skin-mounted sensors is the movement of the soft tissues covering the bones. The aim of this work is to propose a method for validating the attachment of skin-mounted sensors. A second-order (mass-spring-damper) model was proposed to characterize the behaviour of the soft tissue between the bone and the sensor. Three sets of experiments were performed. In the first, different procedures to excite the system were evaluated in order to select an adequate excitation stimulus. In the second, the selected stimulus was applied under varying attachment conditions, while the third experiment was used to test the model. The heel drop was chosen as the excitation method because it showed lower variability and could discriminate between different attachment conditions. In agreement with the model, there was a trend towards an increasing natural frequency of the system with decreasing accelerometer mass. An important result is the development of a standard procedure to test the bandwidth of skin-mounted inertial sensors, such as accelerometers mounted on the skin or markers heavier than a few grams.
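A minimal sketch of the second-order (mass-spring-damper) attachment model: natural frequency and damping ratio as a function of sensor mass. The stiffness, damping and mass values are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch: characteristics of a second-order soft-tissue attachment model.
import numpy as np

def second_order_characteristics(m, k, c):
    """Return undamped natural frequency [Hz] and damping ratio."""
    wn = np.sqrt(k / m)                 # rad/s
    zeta = c / (2.0 * np.sqrt(k * m))
    return wn / (2.0 * np.pi), zeta

for mass in (0.005, 0.010, 0.020):      # lighter sensor -> higher natural frequency
    fn, zeta = second_order_characteristics(m=mass, k=2.0e3, c=1.5)
    print(f"m = {mass*1e3:4.0f} g  ->  f_n = {fn:5.1f} Hz, zeta = {zeta:.2f}")
```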
Probabilistic micromechanics for metal matrix composites
NASA Astrophysics Data System (ADS)
Engelstad, S. P.; Reddy, J. N.; Hopkins, Dale A.
A probabilistic micromechanics-based nonlinear analysis procedure is developed to predict and quantify the variability in the properties of high temperature metal matrix composites. Monte Carlo simulation is used to model the probabilistic distributions of the constituent-level properties, including fiber, matrix, and interphase properties, volume and void ratios, strengths, fiber misalignment, and nonlinear empirical parameters. The procedure predicts the resultant ply properties and quantifies their statistical scatter. Graphite/copper and silicon carbide/titanium aluminide (SCS-6/Ti-15) unidirectional plies are considered to demonstrate the predictive capabilities. The procedure is believed to have high potential for use in material characterization and selection to precede and assist in experimental studies of new high temperature metal matrix composites.
NASTRAN/FLEXSTAB procedure for static aeroelastic analysis
NASA Technical Reports Server (NTRS)
Schuster, L. S.
1984-01-01
Presented is a procedure for using the FLEXSTAB External Structural Influence Coefficients (ESIC) computer program to produce the structural data necessary for the FLEXSTAB Stability Derivatives and Static Stability (SD&SS) program. The SD&SS program computes trim state, stability derivatives, and pressure and deflection data for a flexible airplane having a plane of symmetry. The procedure used a NASTRAN finite-element structural model as the source of structural data in the form of flexibility matrices. Selection of a set of degrees of freedom, definition of structural nodes and panels, reordering and reformatting of the flexibility matrix, and redistribution of existing point mass data are among the topics discussed. Also discussed are boundary conditions and the NASTRAN substructuring technique.
Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián
2013-01-01
In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points at which the individual titers fall below the threshold value for protection. This article focuses on HPV-16/18 and uses a so-called fractional polynomial model, derived in a data-driven fashion, to this effect. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that they fit best over the long run, and hence caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
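A hedged sketch of first-order fractional polynomial selection by AIC on a synthetic decay profile; by convention, the power p = 0 denotes log(t), which also corresponds to the power-law special case on the log scale. Data and coefficients are invented for illustration.

```python
# Hedged sketch: choose the power of a first-order fractional polynomial (FP1)
# for a longitudinal decay profile by AIC (synthetic antibody-titer-like data).
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(1, 60, 60)                       # months since vaccination
log_titer = 8.0 - 1.2 * np.log(t) + rng.normal(0, 0.2, t.size)

def gaussian_aic(y, yhat, n_par):
    rss = np.sum((y - yhat) ** 2)
    return y.size * np.log(rss / y.size) + 2 * n_par

powers = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)         # standard FP1 power set
best = None
for p in powers:
    x = np.log(t) if p == 0 else t ** p          # p = 0 denotes log(t) by convention
    X = np.column_stack([np.ones_like(t), x])
    beta, *_ = np.linalg.lstsq(X, log_titer, rcond=None)
    aic = gaussian_aic(log_titer, X @ beta, 2)
    if best is None or aic < best[0]:
        best = (aic, p)

print("FP1 power selected by AIC:", best[1], " AIC:", round(best[0], 1))
```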
Failure mechanisms in energy-absorbing composite structures
NASA Astrophysics Data System (ADS)
Johnson, Alastair F.; David, Matthew
2010-11-01
Quasi-static tests are described for determination of the energy-absorption properties of composite crash energy-absorbing segment elements under axial loads. Detailed computer tomography scans of failed specimens were used to identify local compression crush failure mechanisms at the crush front. These mechanisms are important for selecting composite materials for energy-absorbing structures, such as helicopter and aircraft sub-floors. Finite element models of the failure processes are described that could be the basis for materials selection and future design procedures for crashworthy structures.
NASA Technical Reports Server (NTRS)
Roller, N. E. G.; Colwell, J. E.; Sellman, A. N.
1985-01-01
A study undertaken in support of NASA's Global Habitability Program is described. A demonstration of geographic information system (GIS) technology for site evaluation and selection is given. The objective was to locate potential fuelwood plantations within a 50 km radius of Nairobi, Kenya. A model was developed to evaluate site potential based on capability and suitability criteria and implemented using the Environmental Research Institute of Michigan's geographic information system.
41 CFR 60-3.6 - Use of selection procedures which have not been validated.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) General Principles § 60-3.6 Use of selection procedures which have not been validated. A. Use of alternate... 41 Public Contracts and Property Management 1 2010-07-01 2010-07-01 true Use of selection procedures which have not been validated. 60-3.6 Section 60-3.6 Public Contracts and Property Management...
The Choice of Spatial Interpolation Method Affects Research Conclusions
NASA Astrophysics Data System (ADS)
Eludoyin, A. O.; Ijisesan, O. S.; Eludoyin, O. M.
2017-12-01
Studies from developing countries using spatial interpolation in geographical information systems (GIS) are few and recent. Many of these studies have adopted interpolation procedures, including kriging, moving average (or inverse distance weighted, IDW) and nearest point methods, without due attention to their uncertainties. This study compared the results of modelled representations of popular interpolation procedures from two commonly used GIS software packages (ILWIS and ArcGIS) at the Obafemi Awolowo University, Ile-Ife, Nigeria. Data used were concentrations of selected biochemical variables (BOD5, COD, SO4, NO3, pH, suspended and dissolved solids) in the Ere stream at Ayepe-Olode, in southwest Nigeria. Water samples were collected using a depth-integrated grab sampling approach at three locations (upstream, downstream and along a palm oil effluent discharge point in the stream); four stations were sited at each location (Figure 1). Data were first subjected to examination of their spatial distributions and associated variogram parameters (nugget, sill and range), using PAleontological STatistics (PAST3), before the mean values of the variables were interpolated in the selected GIS software using each of the kriging (simple), moving average and nearest point approaches. Further, the determined variogram parameters were substituted for the default values in the selected software, and the results were compared. The study showed that the different point interpolation methods did not produce similar results. For example, whereas conductivity was interpolated to vary from 120.1 to 219.5 µScm-1 with kriging, it varied from 105.6 to 220.0 µScm-1 and from 135.0 to 173.9 µScm-1 with the nearest point and moving average interpolations, respectively (Figure 2). It also showed that whereas the computed variogram model produced the best-fit lines (with the least associated error value, SSerror) with the Gaussian model, the spherical model was assumed by default for all the distributions in the software, such that the nugget was assumed to be 0.00 when it was rarely so (Figure 3). The study concluded that the choice of interpolation procedure may affect decisions and conclusions drawn from modelling inferences.
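A minimal sketch contrasting inverse-distance-weighted and nearest-point estimates at one unsampled location; coordinates and conductivity values are made up for illustration and are not the study's data.

```python
# Hedged sketch: IDW versus nearest-point interpolation at one location.
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):                       # exact hit on a sample point
        return z_obs[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_obs) / np.sum(w)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
cond = np.array([120.1, 219.5, 150.0, 173.9])   # illustrative uS/cm values
target = np.array([0.3, 0.4])

nearest = cond[np.argmin(np.linalg.norm(xy - target, axis=1))]
print(f"IDW estimate: {idw(xy, cond, target):.1f}  nearest-point: {nearest:.1f}")
```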
5 CFR 720.206 - Selection guidelines.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Selection guidelines. 720.206 Section 720... guidelines. This subpart sets forth requirements for a recruitment program, not a selection program... procedures and criteria must be consistent with the Uniform Guidelines on Employee Selection Procedures (43...
Selective Heart, Brain and Body Perfusion in Open Aortic Arch Replacement.
Maier, Sven; Kari, Fabian; Rylski, Bartosz; Siepe, Matthias; Benk, Christoph; Beyersdorf, Friedhelm
2016-09-01
Open aortic arch replacement is a complex and challenging procedure, especially in post dissection aneurysms and in redo procedures after previous surgery of the ascending aorta or aortic root. We report our experience with the simultaneous selective perfusion of heart, brain, and remaining body to ensure optimal perfusion and to minimize perfusion-related risks during these procedures. We used a specially configured heart-lung machine with a centrifugal pump as arterial pump and an additional roller pump for the selective cerebral perfusion. Initial arterial cannulation is achieved via femoral artery or right axillary artery. After lower body circulatory arrest and selective antegrade cerebral perfusion for the distal arch anastomosis, we started selective lower body perfusion simultaneously to the selective antegrade cerebral perfusion and heart perfusion. Eighteen patients were successfully treated with this perfusion strategy from October 2012 to November 2015. No complications related to the heart-lung machine and the cannulation occurred during the procedures. Mean cardiopulmonary bypass time was 239 ± 33 minutes, the simultaneous selective perfusion of brain, heart, and remaining body lasted 55 ± 23 minutes. One patient suffered temporary neurological deficit that resolved completely during intensive care unit stay. No patient experienced a permanent neurological deficit or end-organ dysfunction. These high-risk procedures require a concept with a special setup of the heart-lung machine. Our perfusion strategy for aortic arch replacement ensures a selective perfusion of heart, brain, and lower body during this complex procedure and we observed excellent outcomes in this small series. This perfusion strategy is also applicable for redo procedures.
Yoon, Yohan; Geornaras, Ifigenia; Scanga, John A; Belk, Keith E; Smith, Gary C; Kendall, Patricia A; Sofos, John N
2011-08-01
This study developed growth/no growth models for predicting growth boundaries of Listeria monocytogenes on ready-to-eat cured ham and uncured turkey breast slices as a function of lactic acid concentration (0% to 4%), dipping time (0 to 4 min), and storage temperature (4 to 10 °C). A 10-strain composite of L. monocytogenes was inoculated (2 to 3 log CFU/cm²) on slices, followed by dipping into lactic acid and storage in vacuum packages for up to 30 d. Total bacterial (tryptic soy agar plus 0.6% yeast extract) and L. monocytogenes (PALCAM agar) populations were determined on day 0 and at the endpoint of storage. The combinations of parameters that allowed increases in cell counts of L. monocytogenes of at least 1 log CFU/cm² were assigned the value of 1, while those limiting growth to <1 log CFU/cm² were given the value of 0. The binary data were used in logistic regression analysis for development of models to predict boundaries between growth and no growth of the pathogen at desired probabilities. Indices of model performance and validation with limited available data indicated that the models developed had acceptable goodness of fit. Thus, the described procedures using bacterial growth data from studies with food products may be appropriate in developing growth/no growth models to predict growth and to select lactic acid concentrations and dipping times for control of L. monocytogenes. The models developed in this study may be useful in selecting lactic acid concentrations and dipping times to control growth of Listeria monocytogenes on cured ham and uncured turkey breast during product storage, and in determining probabilities of growth under selected conditions. The modeling procedures followed may also be used in model development for other products, conditions, or pathogens. © 2011 Institute of Food Technologists®
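A hedged sketch of a growth/no-growth boundary model: logistic regression of a binary growth indicator on lactic acid concentration, dipping time and storage temperature, followed by a probability prediction for a candidate treatment. The data and coefficients are synthetic.

```python
# Hedged sketch of a growth/no-growth boundary model (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
acid = rng.uniform(0, 4, n)          # % lactic acid
dip = rng.uniform(0, 4, n)           # minutes
temp = rng.uniform(4, 10, n)         # deg C
logit = -1.0 - 1.5 * acid - 0.8 * dip + 0.6 * (temp - 4)
growth = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = at least 1 log increase

X = np.column_stack([acid, dip, temp])
model = LogisticRegression(max_iter=1000).fit(X, growth)

# Conditions with predicted probability below a chosen cut-off (e.g. 0.10)
# lie on the "no growth" side of the boundary.
candidate = np.array([[2.0, 2.0, 7.0]])
print("P(growth) at 2% acid, 2 min dip, 7 C:",
      round(model.predict_proba(candidate)[0, 1], 3))
```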
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Bian, Xin; Yang, Xiu
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models are not interchangeable. If the free parameters are properly selected, the reverse CG procedure also yields an effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces a many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen; Bian, Xin; Karniadakis, George Em, E-mail: george-karniadakis@brown.edu
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
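A minimal sketch of one iterative Boltzmann inversion (IBI) update, V_{i+1}(r) = V_i(r) + kT ln[g_i(r)/g_target(r)], using made-up radial distribution functions; it illustrates the reverse coarse-graining step named above, not the authors' implementation.

```python
# Hedged sketch: a single IBI update of a pairwise CG potential on a radial grid.
import numpy as np

kT = 1.0                                   # reduced units
r = np.linspace(0.5, 3.0, 200)
g_target = 1.0 + 0.4 * np.exp(-(r - 1.0) ** 2 / 0.05)   # target RDF (e.g. from MD)
g_current = 1.0 + 0.3 * np.exp(-(r - 1.1) ** 2 / 0.05)  # RDF from current CG model

V_current = -kT * np.log(g_target)         # common initial guess: potential of mean force
V_updated = V_current + kT * np.log(np.clip(g_current, 1e-8, None) /
                                    np.clip(g_target, 1e-8, None))

print("max change in potential:", np.max(np.abs(V_updated - V_current)))
```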
Assessing the formability of metallic sheets by means of localized and diffuse necking models
NASA Astrophysics Data System (ADS)
Comşa, Dan-Sorin; Lǎzǎrescu, Lucian; Banabic, Dorel
2016-10-01
The main objective of this paper is to elaborate a unified framework for the theoretical assessment of sheet metal formability. Hill's localized necking model and the Extended Maximum Force Criterion proposed by Mattiasson, Sigvant, and Larsson have been selected for this purpose. Both models are thoroughly described together with their solution procedures. A comparison of the theoretical predictions with experimental data on the formability of a DP600 steel sheet is also presented by the authors.
Pricing of common cosmetic surgery procedures: local economic factors trump supply and demand.
Richardson, Clare; Mattison, Gennaya; Workman, Adrienne; Gupta, Subhas
2015-02-01
The pricing of cosmetic surgery procedures has long been thought to coincide with laws of basic economics, including the model of supply and demand. However, the highly variable prices of these procedures indicate that additional economic contributors are probable. The authors sought to reassess the fit of cosmetic surgery costs to the model of supply and demand and to determine the driving forces behind the pricing of cosmetic surgery procedures. Ten plastic surgery practices were randomly selected from each of 15 US cities of various population sizes. Average prices of breast augmentation, mastopexy, abdominoplasty, blepharoplasty, and rhytidectomy in each city were compared with economic and demographic statistics. The average price of cosmetic surgery procedures correlated substantially with population size (r = 0.767), cost-of-living index (r = 0.784), cost to own real estate (r = 0.714), and cost to rent real estate (r = 0.695) across the 15 US cities. Cosmetic surgery pricing also was found to correlate (albeit weakly) with household income (r = 0.436) and per capita income (r = 0.576). Virtually no correlations existed between pricing and the density of plastic surgeons (r = 0.185) or the average age of residents (r = 0.076). Results of this study demonstrate a correlation between costs of cosmetic surgery procedures and local economic factors. Cosmetic surgery pricing cannot be completely explained by the supply-and-demand model because no association was found between procedure cost and the density of plastic surgeons. © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.
Estimating Traffic Accidents in Turkey Using Differential Evolution Algorithm
NASA Astrophysics Data System (ADS)
Akgüngör, Ali Payıdar; Korkmaz, Ersin
2017-06-01
Estimating traffic accidents plays a vital role in applying road safety procedures. This study proposes Differential Evolution Algorithm (DEA) models to estimate the number of accidents in Turkey. In the model development, population (P) and the number of vehicles (N) are selected as model parameters. Three model forms (linear, exponential and semi-quadratic) are developed using the DEA with data covering 2000 to 2014. The developed models are statistically compared to select the best-fitting model. The results of the DE models show that the linear model form is suitable for estimating the number of accidents. The statistics of this form are better than those of the other forms in terms of the performance criteria, namely the Mean Absolute Percentage Error (MAPE) and the Root Mean Square Error (RMSE). To investigate the performance of the linear DE model for future estimations, a ten-year period from 2015 to 2024 is considered. The results obtained from the future estimations reveal the suitability of the DE method for road safety applications.
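A hedged sketch of fitting the linear accident model A = aP + bN + c with a differential evolution search that minimizes RMSE; the synthetic data, bounds and coefficients are illustrative, not Turkey's accident statistics.

```python
# Hedged sketch: differential evolution fit of a linear accident model.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(8)
P = np.linspace(65e6, 78e6, 15)                 # population (illustrative)
N = np.linspace(8e6, 19e6, 15)                  # number of vehicles (illustrative)
A = 2e-3 * P + 5e-3 * N + 1e4 + rng.normal(0, 5e3, P.size)  # accidents

def rmse(params):
    a, b, c = params
    return np.sqrt(np.mean((A - (a * P + b * N + c)) ** 2))

result = differential_evolution(rmse, bounds=[(0, 1), (0, 1), (0, 1e6)], seed=0)
print("fitted coefficients:", result.x, " RMSE:", round(result.fun, 1))
```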
Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu
2017-11-20
Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined the incorporation of value indicators and data on basic manpower expended, time spent, technical difficulty, and degree of risk into the latest standards for the price of medical procedures in China, and it offers a price adjustment model with the relative price ratio as a key index. The study examined 144 TCM procedures and found that prices of TCM procedures were mainly based on the value of the medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low; on average, the current price accounted for 56% of the standardized value of a procedure. Current price levels represented a markedly lower share of the standardized value for acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. The study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized (p < 0.01). This study suggests that adjustment of the price of medical procedures based on a standardized value parity model is a scientific and suitable method of price adjustment that can serve as a reference for other provinces and municipalities in China and for other countries and regions that mainly have fee-for-service (FFS) medical care.
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
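A minimal sketch of plain recursive feature elimination with a linear SVM on synthetic array-like data. The entropy-based chunk elimination that distinguishes E-RFE is not reproduced here; this only shows the standard RFE step it accelerates.

```python
# Hedged sketch: standard RFE with a linear SVM on synthetic array-like data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)
svm = LinearSVC(C=1.0, max_iter=5000)
rfe = RFE(estimator=svm, n_features_to_select=20, step=0.1)  # drop 10% per pass
rfe.fit(X, y)
selected = np.where(rfe.support_)[0]
print("number of retained features:", selected.size)
```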
Space Shuttle Main Engine performance analysis
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
Wright, Michael T; Parker, David R; Amrhein, Christopher
2003-10-15
Sequential extraction procedures (SEPs) have been widely used to characterize the mobility, bioavailability, and potential toxicity of trace elements in soils and sediments. Although oft-criticized, these methods may perform best with redox-labile elements (As, Hg, Se) for which more discrete biogeochemical phases may arise from variations in oxidation number. We critically evaluated two published SEPs for Se for their specificity and precision by applying them to four discrete components in an inert silica matrix: soluble Se(VI) (selenate), Se(IV) (selenite) adsorbed onto goethite, elemental Se, and a metal selenide (FeSe; achavalite). These were extracted both individually and in a mixed model sediment. The more selective of the two procedures was modified to further improve its selectivity (SEP 2M). Both SEP 1 and SEP 2M quantitatively recovered soluble selenate but yielded incomplete recoveries of adsorbed selenite (64% and 81%, respectively). SEP 1 utilizes 0.1 M K2S2O8 to target "organically associated" Se, but this extractant also solubilized most of the elemental (64%) and iron selenide (91%) components of the model sediment. In SEP 2M, the Na2SO3 used in step III is effective in extracting elemental Se but also extracted 17% of the Se from the iron selenide, such that the elemental fraction would be overestimated should both forms coexist. Application of SEP 2M to eight wetland sediments further suggested that the Na2SO3 in step III extracts some organically associated Se, so a NaOH extraction was inserted beforehand to yield a further modification, SEP 2OH. Results using this five-step procedure suggested that the four-step SEP 2M could overestimate elemental Se by as much as 43% due to solubilization of organic Se. Although still imperfect in its selectivity, SEP 2OH may be the most suitable procedure for routine, accurate fractionation of Se in soils and sediments. However, the strong oxidant (NaOCl) used in the final step cannot distinguish between refractory organic forms of Se and pyritic Se that might form under sulfur-reducing conditions.
NASA Astrophysics Data System (ADS)
Bora, Puran S.; Hu, Zhiwei; Tezel, Tongalp H.; Sohn, Jeong-Hyeon; Kang, Shin Goo; Cruz, Jose M. C.; Bora, Nalini S.; Garen, Alan; Kaplan, Henry J.
2003-03-01
Age-related macular degeneration (AMD) is the leading cause of blindness after age 55 in the industrialized world. Severe loss of central vision frequently occurs with the exudative (wet) form of AMD, as a result of the formation of a pathological choroidal neovasculature (CNV) that damages the macular region of the retina. We tested the effect of an immunotherapy procedure, which had been shown to destroy the pathological neovasculature in solid tumors, on the formation of laser-induced CNV in a mouse model simulating exudative AMD in humans. The procedure involves administering an Icon molecule that binds with high affinity and specificity to tissue factor (TF), resulting in the activation of a potent cytolytic immune response against cells expressing TF. The Icon binds selectively to TF on the vascular endothelium of a CNV in the mouse and pig models and also on the CNV of patients with exudative AMD. Here we show that the Icon dramatically reduces the frequency of CNV formation in the mouse model. After laser treatment to induce CNV formation, the mice were injected either with an adenoviral vector encoding the Icon, resulting in synthesis of the Icon by vector-infected mouse cells, or with the Icon protein. The route of injection was i.v. or intraocular. The efficacy of the Icon in preventing formation of laser-induced CNV depends on binding selectively to the CNV. Because the Icon binds selectively to the CNV in exudative AMD as well as to laser-induced CNV, the Icon might also be efficacious for treating patients with exudative AMD.
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
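A hedged sketch of least-squares ARX identification combined with a BIC-style criterion carrying an extra complexity penalty; the penalty weighting, plant and signals are illustrative assumptions, not the calibrated values from the article.

```python
# Hedged sketch: least-squares ARX identification and a penalized BIC criterion.
import numpy as np

def fit_arx(y, u, na, nb):
    """Fit y(k) = sum a_i*y(k-i) + sum b_j*u(k-j) by least squares."""
    lag = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(lag, len(y))]
    X, target = np.asarray(rows), y[lag:]
    theta, *_ = np.linalg.lstsq(X, target, rcond=None)
    rss = np.sum((target - X @ theta) ** 2)
    return theta, rss, len(target)

def penalized_bic(rss, n, n_par, extra_weight=2.0):
    # extra_weight = 1.0 recovers the classical BIC
    return n * np.log(rss / n) + extra_weight * n_par * np.log(n)

rng = np.random.default_rng(9)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(2, 500):                      # simulated second-order plant
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.3 * u[k-1] + 0.05 * rng.normal()

for na, nb in [(1, 1), (2, 1), (2, 2), (3, 3)]:
    _, rss, n = fit_arx(y, u, na, nb)
    print(f"na={na} nb={nb}  penalized BIC = {penalized_bic(rss, n, na+nb):.1f}")
```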
Qiu, Xing; Hu, Rui; Wu, Zhixin
2014-01-01
Normalization procedures are widely used in high-throughput genomic data analyses to remove various technological noise and variations. They are known to have a profound impact on the subsequent gene differential expression analysis. Although there has been some research in evaluating different normalization procedures, few attempts have been made to systematically evaluate the gene detection performances of normalization procedures from the bias-variance trade-off point of view, especially with strong gene differentiation effects and large sample size. In this paper, we conduct a thorough study to evaluate the effects of normalization procedures combined with several commonly used statistical tests and MTPs under different configurations of effect size and sample size. We conduct theoretical evaluation based on a random effect model, as well as simulation and biological data analyses to verify the results. Based on our findings, we provide some practical guidance for selecting a suitable normalization procedure under different scenarios. PMID:24941114
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement for suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
Screening and clustering of sparse regressions with finite non-Gaussian mixtures.
Zhang, Jian
2017-06-01
This article proposes a method to address the problem that can arise when covariates in a regression setting are not Gaussian, which may give rise to approximately mixture-distributed errors, or when a true mixture of regressions produced the data. The method begins with non-Gaussian mixture-based marginal variable screening, followed by fitting a full but relatively smaller mixture regression model to the selected data with the help of a new penalization scheme. Under certain regularity conditions, the new screening procedure is shown to possess a sure screening property even when the population is heterogeneous. We further prove that there exists an elbow point in the associated scree plot which results in a consistent estimator of the set of active covariates in the model. By simulations, we demonstrate that the new procedure can substantially improve the performance of existing procedures in the context of variable screening and data clustering. By applying the proposed procedure to motif data analysis in molecular biology, we demonstrate that the new method holds promise in practice. © 2016, The International Biometric Society.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 Transportation 6 2010-10-01 2010-10-01 false Procedures for selecting low theft light duty... TRUCK LINES TO BE COVERED BY THE THEFT PREVENTION STANDARD § 542.2 Procedures for selecting low theft... a low theft rate have major parts interchangeable with a majority of the covered major parts of a...
Hilbeck, Angelika; Bundschuh, Rebecca; Bundschuh, Mirco; Hofmann, Frieder; Oehen, Bernadette; Otto, Mathias; Schulz, Ralf; Trtikova, Miluse
2017-11-01
For a long time, the environmental risk assessment (ERA) of genetically modified (GM) crops focused mainly on terrestrial ecosystems. This changed when it was scientifically established that aquatic ecosystems are exposed to GM crop residues that may negatively affect aquatic species. To assist the risk assessment process, we present a tool to identify ecologically relevant species usable in tiered testing prior to authorization or for biological monitoring in the field. The tool is derived from a selection procedure for terrestrial ecosystems with substantial but necessary changes to adequately consider the differences in the type of ecosystems. By using available information from the Water Framework Directive (2000/60/EC), the procedure can draw upon existing biological data on aquatic systems. The proposed procedure for aquatic ecosystems was tested for the first time during an expert workshop in 2013, using the cultivation of Bacillus thuringiensis (Bt) maize as the GM crop and 1 stream type as the receiving environment in the model system. During this workshop, species executing important ecological functions in aquatic environments were identified in a stepwise procedure according to predefined ecological criteria. By doing so, we demonstrated that the procedure is practicable with regard to its goal: From the initial long list of 141 potentially exposed aquatic species, 7 species and 1 genus were identified as the most suitable candidates for nontarget testing programs. Integr Environ Assess Manag 2017;13:974-979. © 2017 SETAC.
RRegrs: an R package for computer-aided model selection with multiple regression models.
Tsiliki, Georgia; Munteanu, Cristian R; Seoane, Jose A; Fernandez-Lozano, Carlos; Sarimveis, Haralambos; Willighagen, Egon L
2015-01-01
Predictive regression models can be created with many different modelling approaches. Choices need to be made for data set splitting, cross-validation methods, specific regression parameters and best model criteria, as they all affect the accuracy and efficiency of the produced predictive models and therefore raise model reproducibility and comparison issues. Cheminformatics and bioinformatics make extensive use of predictive modelling and exhibit a need for standardization of these methodologies in order to assist model selection and speed up the process of predictive model development. A tool accessible to all users, irrespective of their statistical knowledge, would be valuable if it tested several simple and complex regression models and validation schemes, produced unified reports, and offered the option to be integrated into more extensive studies. Additionally, such a methodology should be implemented as a free programming package, in order to be continuously adapted and redistributed by others. We propose an integrated framework for creating multiple regression models, called RRegrs. The tool offers the option of ten simple and complex regression methods combined with repeated 10-fold and leave-one-out cross-validation. Methods include Multiple Linear regression, Generalized Linear Model with Stepwise Feature Selection, Partial Least Squares regression, Lasso regression, and Support Vector Machines Recursive Feature Elimination. The new framework is an automated, fully validated procedure which produces standardized reports to quickly oversee the impact of choices in modelling algorithms and to assess the model and cross-validation results. The methodology was implemented as an open source R package, available at https://www.github.com/enanomapper/RRegrs, by reusing and extending the caret package. The universality of the new methodology is demonstrated using five standard data sets from different scientific fields. Its efficiency in cheminformatics and QSAR modelling is shown with three use cases: proteomics data for surface-modified gold nanoparticles, nano-metal oxides descriptor data, and molecular descriptors for acute aquatic toxicity data. The results show that for all data sets RRegrs reports models with equal or better performance for both training and test sets than those reported in the original publications. Its good performance, as well as its adaptability in terms of parameter optimization, could make RRegrs a popular framework to assist the initial exploration of predictive models, and with that, the design of more comprehensive in silico screening applications. Graphical abstract: RRegrs is a computer-aided model selection framework for R multiple regression models; it is a fully validated procedure with application to QSAR modelling.
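RRegrs itself is an R package built on caret; purely as a hedged illustration of the same workflow idea (several regression methods compared under repeated 10-fold cross-validation), here is a compact Python/scikit-learn analogue on a toy data set. None of the names below come from RRegrs.

```python
# Sketch: comparing several regression methods under repeated 10-fold cross-validation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = make_regression(n_samples=120, n_features=15, noise=10.0, random_state=0)
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)

models = {
    "MLR": LinearRegression(),
    "Lasso": Lasso(alpha=0.5),
    "PLS": PLSRegression(n_components=5),
    "SVR": SVR(C=10.0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R2 = {scores.mean():.3f} +/- {scores.std():.3f}")
```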
47 CFR 90.165 - Procedures for mutually exclusive applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... grant, pursuant to § 1.935 of this chapter. (1) Selection methods. In selecting the application to grant, the Commission may use competitive bidding, random selection, or comparative hearings, depending on... chapter, either before or after employing selection procedures. (3) Type of filing group used. Except as...
The High Citadel: The Influence of Harvard Law School.
ERIC Educational Resources Information Center
Seligman, Joel
The history of Harvard Law School, a modern critique, and a proposed new model for American legal education are covered in this book by a Harvard Law graduate. Harvard Law School is called the "high citadel" of American legal education. Its admissions procedures, faculty selection, curriculum, teaching methods, and placement practices…
Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT
ERIC Educational Resources Information Center
Cheng, Ying; Chang, Hua-Hua; Yi, Qing
2007-01-01
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
Reading and Spelling in Adults: Are There Lexical and Sub-Lexical Subtypes?
ERIC Educational Resources Information Center
Burt, Jennifer S.; Heffernan, Maree E.
2012-01-01
The dual-route model of reading proposes distinct lexical and sub-lexical procedures for word reading and spelling. Lexically reliant and sub-lexically reliant reader subgroups were selected from 78 university students on the basis of their performance on lexical (orthographic) and sub-lexical (phonological) choice tests, and on irregular and…
Life History Influences on Holland Vocational Type Development. ASHE 1988 Annual Meeting Paper.
ERIC Educational Resources Information Center
Smart, John C.
The relative influence of selected life history experiences on the development of three vocational types (investigative, social, and enterprising) proposed by J. L. Holland is studied using causal modeling procedures. The lack of explicitness in the developmental postulates of Holland's theory is seen as a major deficiency. Among the principal…
10 CFR 431.325 - Units to be tested.
Code of Federal Regulations, 2011 CFR
2011-01-01
... EQUIPMENT Metal Halide Lamp Ballasts and Fixtures Test Procedures § 431.325 Units to be tested. For each basic model of metal halide lamp ballast selected for testing, a sample of sufficient size, no less than... energy efficiency calculated as the measured output power to the lamp divided by the measured input power...
Phenomenological Behavior-Exchange Models of Marital Success.
ERIC Educational Resources Information Center
Gottman, John; And Others
The objective of two studies was to devise an assessment procedure for the evaluation of therapy with distressed marriages. An extension of behavior exchange theory was proposed to include phenomenological ratings by the couple of the intent of messages sent and the impact of messages received. Convergent criteria were used to select 14…
An Intelligent CAI Monitor and Generative Tutor. Final Report.
ERIC Educational Resources Information Center
Koffman, Elliot B.; Perry, James
This final report summarizes research findings and presents a model for generative computer assisted instruction (CAI) with respect to its usefulness in the classroom environment. Methods used to individualize instruction, and the evolution of a procedure used to select a concept for presentation to a student with the generative CAI system are…
40 CFR 1037.525 - Special procedures for testing hybrid vehicles with power take-off.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of this section to allow testing hybrid vehicles other than electric-battery hybrids, consistent with... model, use good engineering judgment to select the vehicle type with the maximum number of PTO circuits... as needed to stabilize the battery at a full state of charge. For electric hybrid vehicles, we...
76 FR 54141 - Airworthiness Directives; Cessna Aircraft Company Model 680 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-31
... rulemaking (NPRM). SUMMARY: We propose to adopt a new airworthiness directive (AD) for the products listed... airplane flight manual to include procedures to use when the left or right generator is selected OFF. This... cross-feed inputs on the left- and right-hand fuel control cards being connected together and causing an...
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
Summary We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTLs) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
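The following is a rough sketch of one way to realize the general idea of jointly modelling mean and variance with two regularized fits (it is my construction, not the authors' doubly regularized estimator): a lasso for the mean, a lasso for the log squared residuals, and a variance-weighted refit of the mean.

```python
# Sketch: heteroscedastic regression via mean lasso, log-variance lasso, and a weighted refit.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 400, 20
X = rng.standard_normal((n, p))
beta = np.r_[2.0, -1.5, np.zeros(p - 2)]
gamma = np.r_[np.zeros(p - 1), 1.0]                 # last predictor drives the error variance
y = X @ beta + np.exp(0.5 * X @ gamma) * rng.standard_normal(n)

mean_fit = Lasso(alpha=0.1).fit(X, y)               # step 1: mean model
r2 = (y - mean_fit.predict(X)) ** 2
var_fit = Lasso(alpha=0.1).fit(X, np.log(r2 + 1e-8))  # step 2: log-variance model
w = np.exp(-var_fit.predict(X))                     # inverse-variance weights
wfit = Lasso(alpha=0.1).fit(X, y, sample_weight=w)  # step 3: weighted refit of the mean
print("mean predictors:", np.nonzero(wfit.coef_)[0], "variance predictors:", np.nonzero(var_fit.coef_)[0])
```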
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems more and more allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established and different commercially software solutions are available. Potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool was assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient specific 3d-models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result and soft tissue simulation proved to be helpful. In combination with classic 3d-models showing the underlying skeletal pathology virtual simulation improved planning and transfer of craniomaxillofacial corrections. Additional work and expenses may be justified by increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Adaptive Texture Synthesis for Large Scale City Modeling
NASA Astrophysics Data System (ADS)
Despine, G.; Colleu, T.
2015-02-01
Large scale city models textured with aerial images are well suited for bird-eye navigation but generally the image resolution does not allow pedestrian navigation. One solution to face this problem is to use high resolution terrestrial photos but it requires huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance to organize the knowledge about elementary pattern in a texture catalogue allowing attaching physical information, semantic attributes and to execute selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.
Signatures of ecological processes in microbial community time series.
Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie
2018-06-28
Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.
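As a small, hedged illustration of the last step mentioned above (fitting the discrete Ricker model to a single abundance time series), the sketch below recovers the growth rate and carrying capacity by linear regression of log growth rates on abundance; the data are simulated, not from the study.

```python
# Sketch: fitting the Ricker model N_{t+1} = N_t * exp(r*(1 - N_t/K) + eps) by linear regression.
import numpy as np

rng = np.random.default_rng(3)
r_true, K_true, T = 0.6, 100.0, 200
N = np.empty(T); N[0] = 10.0
for t in range(T - 1):
    N[t + 1] = N[t] * np.exp(r_true * (1 - N[t] / K_true) + 0.05 * rng.standard_normal())

g = np.log(N[1:] / N[:-1])                  # log growth rates
A = np.c_[np.ones(T - 1), N[:-1]]
(b0, b1), *_ = np.linalg.lstsq(A, g, rcond=None)
r_hat, K_hat = b0, -b0 / b1                 # since g = r - (r/K)*N + noise
print(r_hat, K_hat)
```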
NASA Astrophysics Data System (ADS)
Cama, M.; Lombardo, L.; Conoscenti, C.; Rotigliano, E.
2017-07-01
Debris flows can be described as rapid gravity-induced mass movements controlled by topography that are usually triggered as a consequence of storm rainfalls. One of the problems when dealing with debris flow recognition is that the eroded surface is usually very shallow and can be masked by vegetation or fast weathering as early as one to two years after a landslide has occurred. For this reason, even areas that are highly susceptible to debris flow might suffer from a lack of reliable landslide inventories. However, these inventories are necessary for susceptibility assessment. Model transferability, which is based on calibrating a susceptibility model in a training area in order to predict the distribution of debris flows in a target area, might provide an efficient solution to this limitation. However, when applying a transferability procedure, a key point is the optimal selection of the predictors to be included for calibrating the model in the source area. In this paper, the issue of optimal factor selection is analysed by comparing the predictive performances obtained following three different factor selection criteria. The study includes: i) a test of the similarity between the source and the target areas; ii) the calibration of the susceptibility model in the (training) source area, using different criteria for the selection of the predictors; iii) the validation of the models, both at the source (self-validation, through random partition) and at the target (transferring, through spatial partition) areas. The debris flow susceptibility is evaluated here using binary logistic regression through an R-scripted procedure. Two separate study areas were selected in the Messina province (southern Italy), on its Ionian (Itala catchment) and Tyrrhenian (Saponara catchment) sides, each hit by a severe debris flow event (in 2009 and 2011, respectively). The investigation showed that the best-fitting model in the calibration area performed poorly in predicting the landslides of the target test area. At the same time, the susceptibility models calibrated with an optimal set of covariates in the source area allowed us to produce a robust and accurate prediction image for the debris flows activated in the Saponara catchment in 2011, exploiting only the data known after the Itala-2009 event.
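A hedged, synthetic sketch of the transferability test described here: calibrate a binary logistic susceptibility model on a source area and score its predictions on a separate target area. The predictor set, the two "areas", and the AUC metric below are illustrative stand-ins, not the study's data or exact procedure.

```python
# Sketch: calibrate a logistic susceptibility model on a source area, transfer it to a target area.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
def make_area(n, shift=0.0):
    X = rng.standard_normal((n, 5)) + shift          # stand-ins for slope, curvature, land use, ...
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

X_src, y_src = make_area(2000)             # source (training) area, assumed
X_tgt, y_tgt = make_area(1500, shift=0.3)  # spatially separate target area, assumed

model = LogisticRegression(max_iter=1000).fit(X_src, y_src)
print("self-validation AUC:", roc_auc_score(y_src, model.predict_proba(X_src)[:, 1]))
print("transfer AUC:", roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1]))
```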
Vasconcelos, A G; Almeida, R M; Nobre, F F
2001-08-01
This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences, according to predefined criteria that tried to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained over 90% of the variation in the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
Yang, Yu-Chiao; Wei, Ming-Chi
2018-06-30
This study compared the use of ultrasound-assisted supercritical CO2 (USC-CO2) extraction to obtain apigenin-rich extracts from Scutellaria barbata D. Don with that of conventional supercritical CO2 (SC-CO2) extraction and heat-reflux extraction (HRE), conducted in parallel. This green procedure yielded 20.1% and 31.6% more apigenin than conventional SC-CO2 extraction and HRE, respectively. Moreover, the extraction time required by the USC-CO2 procedure, which used milder conditions, was approximately 1.9 times and 2.4 times shorter than that required by conventional SC-CO2 extraction and HRE, respectively. Furthermore, the theoretical solubility of apigenin in the supercritical fluid system was obtained from the USC-CO2 dynamic extraction curves and was in good agreement with the calculated values for the three empirical density-based models. The second-order kinetics model was further applied to evaluate the kinetics of USC-CO2 extraction. The results demonstrated that the selected model allowed the evaluation of the extraction rate and extent of USC-CO2 extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
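To make the kinetics step concrete, here is a minimal sketch of fitting a second-order extraction kinetics model via its common linearized form t/Y = 1/(k*Ys^2) + t/Ys, where Ys is the saturation yield and k the rate constant; the data points and units are invented for illustration and are not from this study.

```python
# Sketch: second-order extraction kinetics fitted through its linearized form.
import numpy as np

t = np.array([5, 10, 20, 40, 60, 90, 120.0])          # extraction time, min (assumed)
Y = np.array([1.1, 1.9, 2.9, 3.8, 4.2, 4.5, 4.6])      # cumulative yield, mg/g (assumed)

slope, intercept = np.polyfit(t, t / Y, 1)             # t/Y = intercept + slope*t
Ys_hat = 1.0 / slope                                   # saturation yield
k_hat = 1.0 / (intercept * Ys_hat**2)                  # rate constant
print(Ys_hat, k_hat)
```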
Additional Samples: Where They Should Be Located
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilger, G. G., E-mail: jfelipe@ufrgs.br; Costa, J. F. C. L.; Koppe, J. C.
2001-09-15
Information for mine planning needs to be closely spaced compared to the grid used for exploration and resource assessment. The additional samples collected during quasi-mining are usually located in the same pattern as the original diamond drillhole net, but more closely spaced. This procedure is not the best, in a mathematical sense, for selecting a location. The impact of additional information in reducing the uncertainty about the parameter being modeled is not the same everywhere within the deposit. Some locations are more sensitive in reducing the local and global uncertainty than others. This study introduces a methodology to select additional sample locations based on stochastic simulation. The procedure takes into account data variability and spatial location. Multiple equally probable models representing a geological attribute are generated via geostatistical simulation. These models share basically the same histogram and the same variogram obtained from the original data set. At each block belonging to the model, a value is obtained from each of the n simulations, and their combination allows one to assess local variability. Variability is measured using a proposed uncertainty index, which was used to map zones of high variability. A value extracted from a given simulation is added to the original data set in a zone identified as erratic in the previous maps. The process of adding samples and simulating is repeated, and the benefit of the additional sample is evaluated. The benefit in terms of uncertainty reduction is measured locally and globally. The procedure proved to be robust and theoretically sound, mapping zones where additional information is most beneficial. A case study in a coal mine using coal seam thickness illustrates the method.
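As a hedged sketch of the core idea (ranking candidate sample locations by an uncertainty index computed across equally probable realizations), the snippet below uses the per-block coefficient of variation as one simple choice of index; the realizations are random stand-ins, not geostatistical simulations of real data.

```python
# Sketch: rank blocks by an uncertainty index across equally probable realizations.
import numpy as np

rng = np.random.default_rng(5)
n_blocks, n_sims = 500, 50
realizations = rng.lognormal(mean=1.0, sigma=0.4, size=(n_sims, n_blocks))  # stand-in for simulations

mean_per_block = realizations.mean(axis=0)
std_per_block = realizations.std(axis=0)
uncertainty_index = std_per_block / mean_per_block      # coefficient of variation (assumed index)

candidates = np.argsort(uncertainty_index)[::-1][:10]   # the 10 most erratic blocks
print("suggested additional sample locations (block ids):", candidates)
```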
Study of cryogenic propellant systems for loading the space shuttle
NASA Technical Reports Server (NTRS)
Voth, R. O.; Steward, W. G.; Hall, W. J.
1974-01-01
Computer programs were written to model the liquid oxygen loading system for the space shuttle. The programs allow selection of input data through graphic displays which schematically depict the part of the system being modeled. The computed output is also displayed in the form of graphs and printed messages. Any one of six computation options may be selected. The first four of these pertain to thermal stresses, pressure surges, cooldown times, flow rates and pressures during cooldown. Options five and six deal with possible water hammer effects due to closing of valves, steady flow and transient response to changes in operating conditions after cooldown. Procedures are given for operation of the graphic display unit and minicomputer.
Nelson, Carl A; Miller, David J; Oleynikov, Dmitry
2008-01-01
As modular systems come into the forefront of robotic telesurgery, streamlining the process of selecting surgical tools becomes an important consideration. This paper presents a method for optimal queuing of tools in modular surgical tool systems, based on patterns in tool-use sequences, in order to minimize time spent changing tools. The solution approach is to model the set of tools as a graph, with tool-change frequency expressed as edge weights in the graph, and to solve the Traveling Salesman Problem for the graph. In a set of simulations, this method has shown superior performance at optimizing tool arrangements for streamlining surgical procedures.
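As a toy, hedged illustration of the idea described (the tool set as a graph with tool-change frequency as edge weights, solved as a small Traveling Salesman-style ordering problem), the sketch below brute-forces the ordering that places frequently exchanged tools next to each other; the tool names and frequency counts are invented.

```python
# Sketch: brute-force ordering of tools so that frequently exchanged pairs sit adjacent in the queue.
from itertools import permutations

tools = ["grasper", "scissors", "needle_driver", "hook"]
freq = {                                   # observed tool-change counts (made up)
    ("grasper", "scissors"): 12, ("grasper", "needle_driver"): 3,
    ("grasper", "hook"): 5, ("scissors", "needle_driver"): 9,
    ("scissors", "hook"): 2, ("needle_driver", "hook"): 7,
}
def w(a, b):
    return freq.get((a, b), freq.get((b, a), 0))

def score(order):                          # total change frequency between adjacent queue slots
    return sum(w(order[i], order[i + 1]) for i in range(len(order) - 1))

best = max(permutations(tools), key=score)
print(best, score(best))
```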
ERIC Educational Resources Information Center
Bennett, Mick; Wakeford, Richard
This guide is intended to help those responsible for choosing health care trainees to develop and improve their selection procedures. Special reference is given to health workers in maternal and child health. Chapter 1 deals with health care policy implications for selection of trainees, the different functions of selection and conflicts that…
Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing
2018-06-01
Feature selection plays an important role in EEG-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. It has two significant advantages: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and to automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by an autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
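The snippet below is a hedged sketch of one piece of such a pipeline: ranking features by a symmetrized Kullback-Leibler divergence between per-class univariate Gaussian fits and then classifying with LDA. The CSP preprocessing and autoregressive feature extraction from the paper are not reproduced; the inputs are synthetic.

```python
# Sketch: symmetrized KL divergence between per-class Gaussian fits as a feature-ranking score, then LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def sym_kl_gauss(x0, x1):
    m0, v0 = x0.mean(), x0.var() + 1e-12
    m1, v1 = x1.mean(), x1.var() + 1e-12
    kl01 = 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1 + np.log(v1 / v0))
    kl10 = 0.5 * (v1 / v0 + (m0 - m1) ** 2 / v0 - 1 + np.log(v0 / v1))
    return kl01 + kl10

rng = np.random.default_rng(2)
n, p, informative = 200, 30, 5
X = rng.standard_normal((n, p))
y = rng.integers(0, 2, n)
X[y == 1, :informative] += 1.0            # only the first few features carry class information

scores = np.array([sym_kl_gauss(X[y == 0, j], X[y == 1, j]) for j in range(p)])
selected = np.argsort(scores)[::-1][:informative]
clf = LinearDiscriminantAnalysis().fit(X[:, selected], y)
print(sorted(selected), clf.score(X[:, selected], y))
```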
NASA Astrophysics Data System (ADS)
Pons, M.; Bernard, C.; Rouch, H.; Madar, R.
1995-10-01
The purpose of this article is to present the modelling routes for the chemical vapour deposition process with a special emphasis on mass transport models with near local thermochemical equilibrium imposed in the gas-phase and at the deposition surface. The theoretical problems arising from the linking of the two selected approaches, thermodynamics and mass transport, are shown and a solution procedure is proposed. As an illustration, selected results of thermodynamic and mass transport analysis and of the coupled approach showed that, for the deposition of Si1-xGex solid solution at 1300 K (Si-Ge-Cl-H-Ar system), the thermodynamic heterogeneous stability of the reactive gases and the thermal diffusion led to the germanium depletion of the deposit.
Vittorazzi, C; Amaral Junior, A T; Guimarães, A G; Viana, A P; Silva, F H L; Pena, G F; Daher, R F; Gerhardt, I F S; Oliveira, G H F; Pereira, M G
2017-09-27
Selection indices commonly utilize economic weights, which can make the resulting genetic gains arbitrary. In popcorn, this is even more evident due to the negative correlation between the main characteristics of economic importance - grain yield and popping expansion. As an alternative to classical biometric selection indices, the optimal procedure restricted maximum likelihood/best linear unbiased predictor (REML/BLUP) allows the simultaneous estimation of genetic parameters and the prediction of genotypic values. Based on the mixed model methodology, the objective of this study was to investigate the comparative efficiency of eight selection indices estimated by REML/BLUP for the effective selection of superior popcorn families in the eighth intrapopulation recurrent selection cycle. We also investigated the efficiency of including the variable "expanded popcorn volume per hectare" in the most advantageous selection of superior progenies. In total, 200 full-sib families were evaluated in two different areas in the North and Northwest regions of the State of Rio de Janeiro, Brazil. The REML/BLUP procedure resulted in higher estimated gains than those obtained with classical biometric selection index methodologies and should be incorporated into the selection of progenies. The following indices resulted in higher gains in the characteristics of greatest economic importance: the classical selection index/values attributed by trial, via REML/BLUP, and the greatest genotypic values/expanded popcorn volume per hectare, via REML. The expanded popcorn volume per hectare characteristic enabled satisfactory gains in grain yield and popping expansion; this characteristic should be considered a super-trait in popcorn breeding programs.
Building a kinetic Monte Carlo model with a chosen accuracy.
Bhute, Vijesh J; Chatterjee, Abhijit
2013-06-28
The kinetic Monte Carlo (KMC) method is a popular modeling approach for reaching large materials length and time scales. The KMC dynamics is erroneous when atomic processes that are relevant to the dynamics are missing from the KMC model. Recently, we had developed for the first time an error measure for KMC in Bhute and Chatterjee [J. Chem. Phys. 138, 084103 (2013)]. The error measure, which is given in terms of the probability that a missing process will be selected in the correct dynamics, requires estimation of the missing rate. In this work, we present an improved procedure for estimating the missing rate. The estimate found using the new procedure is within an order of magnitude of the correct missing rate, unlike our previous approach where the estimate was larger by orders of magnitude. This enables one to find the error in the KMC model more accurately. In addition, we find the time for which the KMC model can be used before a maximum error in the dynamics has been reached.
Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA
Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui
2014-01-01
Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative to avoid restrictive parametric assumption. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792
ERIC Educational Resources Information Center
American Association of School Personnel Administrators, Seven Hills, OH.
These guidelines are intended to provide personnel administrators with a means of evaluating their current practices and procedures in teacher selection. The guidelines cover recruitment, hiring criteria, employment interviews, and the follow-up to selection. A suggested personnel selection procedure outlines application, file preparation, and the…
Evaluating synoptic systems in the CMIP5 climate models over the Australian region
NASA Astrophysics Data System (ADS)
Gibson, Peter B.; Uotila, Petteri; Perkins-Kirkpatrick, Sarah E.; Alexander, Lisa V.; Pitman, Andrew J.
2016-10-01
Climate models are our principal tool for generating the projections used to inform climate change policy. Our confidence in projections depends, in part, on how realistically they simulate present day climate and associated variability over a range of time scales. Traditionally, climate models are less commonly assessed at time scales relevant to daily weather systems. Here we explore the utility of a self-organizing maps (SOMs) procedure for evaluating the frequency, persistence and transitions of daily synoptic systems in the Australian region simulated by state-of-the-art global climate models. In terms of skill in simulating the climatological frequency of synoptic systems, large spread was observed between models. A positive association between all metrics was found, implying that relative skill in simulating the persistence and transitions of systems is related to skill in simulating the climatological frequency. Considering all models and metrics collectively, model performance was found to be related to model horizontal resolution but unrelated to vertical resolution or representation of the stratosphere. In terms of the SOM procedure, the timespan over which evaluation was performed had some influence on model performance skill measures, as did the number of circulation types examined. These findings have implications for selecting models most useful for future projections over the Australian region, particularly for projections related to synoptic scale processes and phenomena. More broadly, this study has demonstrated the utility of the SOMs procedure in providing a process-based evaluation of climate models.
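As a hedged, self-contained illustration of the self-organizing map step (not the study's configuration, grid size, or data), the sketch below trains a small SOM on stand-in "daily circulation" vectors and reports the climatological frequency of each node, which is the kind of metric such an evaluation compares between models and a reference data set.

```python
# Sketch: a tiny self-organizing map and the resulting node (synoptic type) frequencies.
import numpy as np

rng = np.random.default_rng(4)
data = rng.standard_normal((1000, 16))        # stand-in for flattened daily pressure anomaly fields
rows, cols, epochs, lr0, sigma0 = 3, 4, 20, 0.5, 1.5
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
W = rng.standard_normal((rows * cols, data.shape[1]))

for e in range(epochs):
    lr = lr0 * (1 - e / epochs)
    sigma = sigma0 * (1 - e / epochs) + 0.3
    for x in rng.permutation(data):
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))                 # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
        W += lr * h[:, None] * (x - W)

bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in data])
freq = np.bincount(bmus, minlength=rows * cols) / len(data)
print(freq.reshape(rows, cols))               # climatological frequency of each synoptic type
```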
48 CFR 715.370 - Alternative source selection procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Alternative source selection procedures. 715.370 Section 715.370 Federal Acquisition Regulations System AGENCY FOR INTERNATIONAL DEVELOPMENT CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715...
Predictive modeling of altitude decompression sickness in humans
NASA Technical Reports Server (NTRS)
Kenyon, D. J.; Hamilton, R. W., Jr.; Colley, I. A.; Schreiner, H. R.
1972-01-01
The coding of data on 2,565 individual human altitude chamber tests is reported. As part of a selection procedure designed to eliminate individuals who are highly susceptible to decompression sickness, individual aircrew members were exposed to the pressure equivalent of 37,000 feet and observed for one hour. Many entries refer to subjects who have been tested two or three times. These data contain a substantial body of statistical information important to the understanding of the mechanisms of altitude decompression sickness and to the computation of improved high-altitude operating procedures. Appropriate computer formats and encoding procedures were developed, and all 2,565 entries have been converted to these formats and stored on magnetic tape. A gas loading file was produced.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Ho, H. W.; Kurth, R. E.
1991-01-01
The work performed to develop composite load spectra (CLS) for the Space Shuttle Main Engine (SSME) using probabilistic methods is described. Three methods were implemented for the engine system influence model; RASCAL was chosen as the principal method, as most component load models were implemented with it. Validation of RASCAL was performed: accuracy comparable to the Monte Carlo method can be obtained if a large enough bin size is used. Generic probabilistic models were developed and implemented for load calculations using the probabilistic methods discussed above. Each engine mission, either a real flight or a test, has three mission phases: the engine start transient phase, the steady-state phase, and the engine cut-off transient phase. Power level and engine operating inlet conditions change during a mission. The load calculation module provides steady-state and quasi-steady-state calculation procedures with a duty-cycle-data option; the quasi-steady-state procedure is for engine transient phase calculations. In addition, a few generic probabilistic load models were developed for specific conditions. These include the fixed transient spike model, the Poisson arrival transient spike model, and the rare event model. These generic probabilistic load models provide sufficient latitude for simulating loads under specific conditions. For SSME components, turbine blades, transfer ducts, the LOX post, and the high-pressure oxidizer turbopump (HPOTP) discharge duct were selected for application of the CLS program. The loads include static pressure loads and dynamic pressure loads for all four components, centrifugal force for the turbine blade, temperatures for thermal loads for all four components, and structural vibration loads for the ducts and LOX posts.
1981-07-01
for selected procedures g. Periodontal Procedures Per Dentist Formula: Number of Perio Procedures Completed* / Number of Dentists Assigned *See appendix... 02336 - Resin, Complex A-2 Selected Endodontic Procedures for Endo teeth per assigned DDS ratio: 03311 - Anterior, 1 Canal Filled 03312 - Anterior, 2 or... 04271 - Free Soft Tissue Graft 04272 - Vestibuloplasty 04340 - Perio Scale and Root Planing *Some of these procedures are not end-item entities as are
48 CFR 906.102 - Use of competitive procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... accordance with subpart 936.6 and 48 CFR subpart 36.6. (4) Program research and development announcements shall follow the competitive selection procedures for the award of research proposals in accordance with... follow the competitive selection procedures for award of these proposals in accordance with subpart 917...
ERIC Educational Resources Information Center
Lane, David O.
The idea that there was a need for formal study of the methods by which titles are selected for addition to the collections of academic science libraries resulted in this investigation of the selection processes of these libraries. Specifically, the study concentrates on the selection procedures in three sciences: biology, chemistry, and physics.…
From Metaphors to Formalism: A Heuristic Approach to Holistic Assessments of Ecosystem Health.
Fock, Heino O; Kraus, Gerd
2016-01-01
Environmental policies employ metaphoric objectives such as ecosystem health, resilience and sustainable provision of ecosystem services, which influence corresponding sustainability assessments by means of normative settings such as assumptions on system description, indicator selection, aggregation of information and target setting. A heuristic approach is developed for sustainability assessments to avoid ambiguity and applications to the EU Marine Strategy Framework Directive (MSFD) and OSPAR assessments are presented. For MSFD, nineteen different assessment procedures have been proposed, but at present no agreed assessment procedure is available. The heuristic assessment framework is a functional-holistic approach comprising an ex-ante/ex-post assessment framework with specifically defined normative and systemic dimensions (EAEPNS). The outer normative dimension defines the ex-ante/ex-post framework, of which the latter branch delivers one measure of ecosystem health based on indicators and the former allows to account for the multi-dimensional nature of sustainability (social, economic, ecological) in terms of modeling approaches. For MSFD, the ex-ante/ex-post framework replaces the current distinction between assessments based on pressure and state descriptors. The ex-ante and the ex-post branch each comprise an inner normative and a systemic dimension. The inner normative dimension in the ex-post branch considers additive utility models and likelihood functions to standardize variables normalized with Bayesian modeling. Likelihood functions allow precautionary target setting. The ex-post systemic dimension considers a posteriori indicator selection by means of analysis of indicator space to avoid redundant indicator information as opposed to a priori indicator selection in deconstructive-structural approaches. Indicator information is expressed in terms of ecosystem variability by means of multivariate analysis procedures. The application to the OSPAR assessment for the southern North Sea showed, that with the selected 36 indicators 48% of ecosystem variability could be explained. Tools for the ex-ante branch are risk and ecosystem models with the capability to analyze trade-offs, generating model output for each of the pressure chains to allow for a phasing-out of human pressures. The Bayesian measure of ecosystem health is sensitive to trends in environmental features, but robust to ecosystem variability in line with state space models. The combination of the ex-ante and ex-post branch is essential to evaluate ecosystem resilience and to adopt adaptive management. Based on requirements of the heuristic approach, three possible developments of this concept can be envisioned, i.e. a governance driven approach built upon participatory processes, a science driven functional-holistic approach requiring extensive monitoring to analyze complete ecosystem variability, and an approach with emphasis on ex-ante modeling and ex-post assessment of well-studied subsystems.
Implications of the New EEOC Guidelines.
ERIC Educational Resources Information Center
Dhanens, Thomas P.
1979-01-01
In the next few years employers will frequently be confronted with the fact that they cannot rely on undocumented, subjective selection procedures. As long as disparate impact exists in employee selection, employers will be required to validate whatever selection procedures they use. (Author/IRT)
Experimental selective posterior semicircular canal laser deafferentation.
Naguib, Maged B
2005-05-01
In this experimental study, we attempted to perform selective deafferentation of the posterior semicircular canal ampulla of guinea pigs using a carbon dioxide laser beam. The results of this study document the efficacy of this procedure in achieving deafferentation of the posterior semicircular canal safely with regard to the other semicircular canals, the otolithic organ and the organ of hearing. Moreover, the procedure is performed with relative ease compared with other procedures previously described for selective deafferentation of the posterior semicircular canal. The clinical application of such a procedure for the treatment of intractable benign paroxysmal positional vertigo in humans is suggested.
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Electronic processes in TTF-derived complexes studied by IR spectroscopy
NASA Astrophysics Data System (ADS)
Graja, Andrzej
2001-09-01
We focus our attention on the plasma-edge-like dispersion of the reflectance spectra of selected bis(ethylenedithio)tetrathiafulvalene (BEDT-TTF)-derived organic conductors. The standard procedure to determine the electron transport parameters in low-dimensional organic conductors consists of fitting appropriate theoretical models to the experimental reflectance data. This procedure provides us with basic information such as the plasma frequency, the optical effective mass of the charge carriers, their number, mean free path and damping constant. Therefore, it is concluded that spectroscopy is a powerful tool to study the electronic processes in conducting organic solids.
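To make the fitting step concrete, here is a hedged sketch of fitting a simple Drude dielectric function to a plasma-edge-like reflectance curve; the functional form is the textbook Drude model, and the parameter values and the synthetic "data" are purely illustrative, not drawn from these materials.

```python
# Sketch: least-squares fit of a Drude-model reflectance to a plasma-edge-like spectrum.
import numpy as np
from scipy.optimize import curve_fit

def drude_reflectance(w, wp, gamma, eps_inf):
    eps = eps_inf - wp**2 / (w**2 + 1j * gamma * w)   # Drude dielectric function
    n = np.sqrt(eps)
    return np.abs((n - 1) / (n + 1)) ** 2             # normal-incidence reflectance

w = np.linspace(500, 8000, 300)                       # wavenumber grid, cm^-1 (assumed)
R_obs = drude_reflectance(w, 4500, 600, 3.0) + 0.01 * np.random.default_rng(0).standard_normal(w.size)

popt, _ = curve_fit(drude_reflectance, w, R_obs, p0=(4000, 500, 2.5))
print("plasma frequency, damping, eps_inf:", popt)
```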
Mediation in dyadic data at the level of the dyads: a Structural Equation Modeling approach.
Ledermann, Thomas; Macho, Siegfried
2009-10-01
An extended version of the Common Fate Model (CFM) is presented to estimate and test mediation in dyadic data. The model can be used for distinguishable dyad members (e.g., heterosexual couples) or indistinguishable dyad members (e.g., homosexual couples) if (a) the variables measure characteristics of the dyadic relationship or shared external influences that affect both partners; if (b) the causal associations between the variables should be analyzed at the dyadic level; and if (c) the measured variables are reliable indicators of the latent variables. To assess mediation using Structural Equation Modeling, a general three-step procedure is suggested. The first is a selection of a good fitting model, the second a test of the direct effects, and the third a test of the mediating effect by means of bootstrapping. The application of the model along with the procedure for assessing mediation is illustrated using data from 184 couples on marital problems, communication, and marital quality. Differences with the Actor-Partner Interdependence Model and the analysis of longitudinal mediation by using the CFM are discussed.
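As a hedged illustration of the bootstrapping step only (the CFM's latent dyad-level variables and SEM machinery are not reproduced), the sketch below bootstraps a simple indirect effect a*b in an ordinary regression mediation setting with synthetic data; the variable names echo the example but are not the study's data.

```python
# Sketch: bootstrap confidence interval for an indirect (mediated) effect a*b.
import numpy as np

rng = np.random.default_rng(11)
n = 184
x = rng.standard_normal(n)                       # e.g. marital problems (synthetic)
m = 0.5 * x + rng.standard_normal(n)             # e.g. communication (synthetic)
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)   # e.g. marital quality (synthetic)

def indirect(idx):
    xs, ms, ys = x[idx], m[idx], y[idx]
    a = np.polyfit(xs, ms, 1)[0]                                                  # M ~ X
    b = np.linalg.lstsq(np.c_[np.ones(len(idx)), xs, ms], ys, rcond=None)[0][2]   # Y ~ X + M
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
print("indirect effect:", indirect(np.arange(n)), "95% CI:", np.percentile(boot, [2.5, 97.5]))
```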
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
Robust estimation for partially linear models with large-dimensional covariates
Zhu, LiPing; Li, RunZe; Cui, HengJian
2014-01-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of linear component performs asymptotically as well as its oracle counterpart which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087
An artificial system for selecting the optimal surgical team.
Saberi, Nahid; Mahvash, Mohsen; Zenati, Marco
2015-01-01
We introduce an intelligent system to optimize a team composition based on the team's historical outcomes and apply this system to compose a surgical team. The system relies on a record of the procedures performed in the past. The optimal team composition is the one with the lowest probability of an unfavorable outcome. We use probability theory and the inclusion-exclusion principle to model the probability of the team outcome for a given composition. A probability value is assigned to each person in the database, and the probability of a team composition is calculated from these values. The model makes it possible to determine the probability of all possible team compositions even if there is no recorded procedure for some of them. From an analytical perspective, assembling an optimal team is equivalent to minimizing the overlap of team members who have a recurring tendency to be involved in procedures with unfavorable results. A conceptual example shows the accuracy of the proposed system in obtaining the optimal team.
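A minimal sketch of the composition idea under an explicit independence assumption (a simplification; the paper's inclusion-exclusion model is more general): each member carries an individual probability of contributing to an unfavorable outcome, and the team risk is one minus the product of the complements. All names and probabilities below are invented.

```python
# Sketch: pick the team with the lowest unfavorable-outcome probability under independence.
from itertools import combinations
from math import prod

p = {"surgeon_A": 0.04, "surgeon_B": 0.09, "nurse_C": 0.02,
     "nurse_D": 0.05, "perfusionist_E": 0.03}           # per-person probabilities (made up)

def team_risk(members):
    # inclusion-exclusion for independent events reduces to 1 - prod(1 - p_i)
    return 1.0 - prod(1.0 - p[m] for m in members)

teams = combinations(p, 3)                              # all possible 3-person teams
best = min(teams, key=team_risk)
print(best, round(team_risk(best), 4))
```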
Robust estimation for partially linear models with large-dimensional covariates.
Zhu, LiPing; Li, RunZe; Cui, HengJian
2013-10-01
We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.
Pretest information for a test to validate plume simulation procedures (FA-17)
NASA Technical Reports Server (NTRS)
Hair, L. M.
1978-01-01
The results of an effort to plan a final verification wind tunnel test to validate the recommended correlation parameters and application techniques were presented. The test planning effort was complete except for test site finalization and the associated coordination. Two suitable test sites were identified. Desired test conditions were shown. Subsequent sections of this report present the selected model and test site, instrumentation of this model, planned test operations, and some concluding remarks.
Model reduction in a subset of the original states
NASA Technical Reports Server (NTRS)
Yae, K. H.; Inman, D. J.
1992-01-01
A model reduction method is investigated to provide a smaller structural dynamic model for subsequent structural control design. A structural dynamic model is assumed to be derived from finite element analysis. It is first converted into the state space form, and is further reduced by the internal balancing method. Through the co-ordinate transformation derived from the states that are deleted during reduction, the reduced model is finally expressed in a subset of the original states. Therefore, the states in the final reduced model represent the degrees of freedom of the nodes that are selected by the designer. The procedure provides a more practical implementation of model reduction for applications in which specific nodes, such as sensor and/or actuator attachment points, are to be retained in the reduced model. Thus, it ensures that the reduced model has the same input and output configuration as the original physical model. The procedure is applied to two simple examples and comparisons are made between the full and reduced-order models. The method can be applied to a linear, continuous and time-invariant model of structural dynamics with nonproportional viscous damping.
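The internal balancing step mentioned above can be sketched with standard Gramian-based balanced truncation. The Python sketch below (NumPy/SciPy assumed available) covers only the balancing and truncation of a stable state-space model; the paper's additional transformation back onto a designer-selected subset of the original physical states is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, k):
    """Reduce a stable LTI state-space model (A, B, C) to order k by internal
    balancing and truncation of the states with the smallest Hankel singular
    values."""
    # Controllability and observability Gramians:
    #   A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

    # Square-root balancing: Wc = Lc Lc^T, Wo = Lo Lo^T
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                 # s holds the Hankel singular values

    T = Lc @ Vt.T @ np.diag(s ** -0.5)        # balancing transformation
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T

    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:k, :k], Bb[:k, :], Cb[:, :k], s

if __name__ == "__main__":
    # Small stable example system
    A = np.array([[-1.0, 0.5, 0.0], [0.0, -2.0, 1.0], [0.0, 0.0, -3.0]])
    B = np.array([[1.0], [0.0], [1.0]])
    C = np.array([[1.0, 1.0, 0.0]])
    Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=2)
    print("Hankel singular values:", np.round(hsv, 4))
```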
The Effect of Curriculum Sample Selection for Medical School
ERIC Educational Resources Information Center
de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland
2017-01-01
In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we selected applicants. We designed a procedure based on…
48 CFR 570.305 - Two-phase design-build selection procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Two-phase design-build... for Leasehold Interests in Real Property 570.305 Two-phase design-build selection procedures. (a) These procedures apply to acquisitions of leasehold interests if you use the two-phase design-build...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
28 CFR 50.14 - Guidelines on employee selection procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... out are: Assumptions of validity based on a procedure's name or descriptive labels; all forms of... relationship (e.g., correlation coefficient) between performance on a selection procedure and one or more... upon a study involving a large number of subjects and has a low correlation coefficient will be subject...
Chatzistergos, Panagiotis E; Naemi, Roozbeh; Chockalingam, Nachiappan
2015-06-01
This study aims to develop a numerical method that can be used to investigate the cushioning properties of different insole materials on a subject-specific basis. Diabetic footwear and orthotic insoles play an important role in the reduction of plantar pressure in people with type-2 diabetes. Despite that, little information exists about their optimum cushioning properties. A new in vivo measurement-based computational procedure was developed, which entails the generation of 2D subject-specific finite element models of the heel pad based on ultrasound indentation. These models are used to inverse engineer the material properties of the heel pad and simulate the contact between plantar soft tissue and a flat insole. After its validation, this modelling procedure was utilised to investigate the importance of plantar soft tissue stiffness, thickness and loading for the correct selection of insole material. The results indicated that heel pad stiffness and thickness influence plantar pressure but not the optimum insole properties. On the other hand, loading appears to significantly influence the optimum insole material properties. These results indicate that parameters that affect the loading of the plantar soft tissues, such as body mass or a person's level of physical activity, should be carefully considered during insole material selection. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hetmaniok, Edyta; Hristov, Jordan; Słota, Damian; Zielonka, Adam
2017-05-01
The paper presents a procedure for solving the inverse problem of binary alloy solidification in a two-dimensional space. This is a continuation of previous works of the authors investigating a similar problem in a one-dimensional domain. The goal of the problem is to identify the heat transfer coefficient on the boundary of the region and to reconstruct the temperature distribution inside the considered region when temperature measurements at selected points of the alloy are known. The mathematical model of the problem is based on the heat conduction equation with the substitute thermal capacity and with the liquidus and solidus temperatures varying in dependence on the concentration of the alloy component. For describing this concentration the Scheil model is used. The investigated procedure also involves the parallelized Ant Colony Optimization algorithm, applied to minimize a functional expressing the error of the approximate solution.
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model to describe probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method based on a log-log scale, where a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
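One common way to make the threshold choice objective, in the same spirit as the discrepancy measure described above, is to scan candidate thresholds and keep the one minimizing the Kolmogorov-Smirnov distance between the empirical tail and the fitted power law. The sketch below uses a maximum-likelihood (Hill-type) exponent estimate and synthetic data; it is an illustrative stand-in, not the specific discrepancy statistic used in the paper.

```python
import numpy as np

def fit_power_law_threshold(data, n_candidates=50):
    """Select the power-law threshold xmin by minimizing the Kolmogorov-Smirnov
    distance between the empirical tail and the Pareto distribution fitted
    (by maximum likelihood) above each candidate threshold."""
    data = np.sort(np.asarray(data, dtype=float))
    candidates = np.quantile(data, np.linspace(0.50, 0.99, n_candidates))
    best = None
    for xmin in candidates:
        tail = data[data >= xmin]
        n = tail.size
        alpha = 1.0 + n / np.sum(np.log(tail / xmin))     # MLE (Hill-type) exponent
        emp_cdf = np.arange(1, n + 1) / n                 # empirical CDF on the tail
        fit_cdf = 1.0 - (xmin / tail) ** (alpha - 1.0)    # fitted Pareto CDF
        ks = np.max(np.abs(emp_cdf - fit_cdf))
        if best is None or ks < best[0]:
            best = (ks, xmin, alpha)
    return best  # (KS distance, threshold, exponent)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = (rng.pareto(2.0, 5000) + 1.0) * 1.5          # synthetic heavy-tailed data
    ks, xmin, alpha = fit_power_law_threshold(sample)
    print(f"xmin = {xmin:.3f}, alpha = {alpha:.3f}, KS = {ks:.4f}")
```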
SAFARI, an On-Line Text-Processing System User's Manual.
ERIC Educational Resources Information Center
Chapin, P.G.; And Others.
This report describes for the potential user a set of procedures for processing textual materials on-line. In this preliminary model an information analyst can scan through messages, reports, and other documents on a display scope and select relevant facts, which are processed linguistically and then stored in the computer in the form of logical…
ERIC Educational Resources Information Center
Padgett, Ryan D.; Salisbury, Mark H.; An, Brian P.; Pascarella, Ernest T.
2010-01-01
The sophisticated analytical techniques available to institutional researchers give them an array of procedures to estimate a causal effect using observational data. But as many quantitative researchers have discovered, access to a wider selection of statistical tools does not necessarily ensure construction of a better analytical model. Moreover,…
And never the twain shall meet? Integrating revenue cycle and supply chain functions.
Matjucha, Karen A; Chung, Bianca
2008-09-01
Four initial steps to implementing a profit and loss management model are: Identify the supplies clinicians are using. Empower stakeholders to remove items that are not commonly used. Reduce factors driving wasted product. Review the chargemaster to ensure that supplies used in selected procedures are represented. Strategically set prices that optimize maximum allowable reimbursement.
Arsiccio, Andrea; Pisano, Roberto
2018-06-01
The present work shows a rational method for the development of the freezing step of a freeze-drying cycle. The current approach to the selection of freezing conditions is still empirical and nonsystematic, thus resulting in poor robustness of control strategy. The final aim of this work is to fill this gap, describing a rational procedure, based on mathematical modeling, for properly choosing the freezing conditions. Mechanistic models are used for the prediction of temperature profiles during freezing and dimension of ice crystals being formed. Mathematical description of the drying phase of freeze-drying is also coupled with the results obtained by freezing models, thus providing a comprehensive characterization of the lyophilization process. In this framework, deep understanding of the phenomena involved is required, and according to the Quality by Design approach, this knowledge can be used to build the design space. The step-by-step procedure for building the design space for freezing is thus described, and examples of applications are provided. The calculated design space is validated upon experimental data, and we show that it allows easy control of the freezing process and fast selection of appropriate operating conditions. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.2 Scope. A. Application of... tests and other selection procedures which are used as a basis for any employment decision. Employment... certification may be covered by Federal equal employment opportunity law. Other selection decisions, such as...
Code of Federal Regulations, 2013 CFR
2013-07-01
... GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.2 Scope. A. Application of... tests and other selection procedures which are used as a basis for any employment decision. Employment... certification may be covered by Federal equal employment opportunity law. Other selection decisions, such as...
Code of Federal Regulations, 2012 CFR
2012-07-01
... GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.2 Scope. A. Application of... tests and other selection procedures which are used as a basis for any employment decision. Employment... certification may be covered by Federal equal employment opportunity law. Other selection decisions, such as...
Code of Federal Regulations, 2014 CFR
2014-07-01
... GUIDELINES ON EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 60-3.2 Scope. A. Application of... tests and other selection procedures which are used as a basis for any employment decision. Employment... certification may be covered by Federal equal employment opportunity law. Other selection decisions, such as...
Procedures for Selecting Items for Computerized Adaptive Tests.
ERIC Educational Resources Information Center
Kingsbury, G. Gage; Zara, Anthony R.
1989-01-01
Several classical approaches and alternative approaches to item selection for computerized adaptive testing (CAT) are reviewed and compared. The study also describes procedures for constrained CAT that may be added to classical item selection approaches to allow them to be used for applied testing. (TJH)
NASA Astrophysics Data System (ADS)
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-01
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
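The reverse coarse-graining route can be illustrated with the standard iterative Boltzmann inversion update, V_{i+1}(r) = V_i(r) + k_B T ln[g_i(r)/g_target(r)]. The sketch below assumes a hypothetical run_cg_simulation routine standing in for an MD/DPD engine that returns the radial distribution function produced by the current CG potential; the damping factor and grid handling are illustrative choices, not the settings of the paper.

```python
import numpy as np

kBT = 2.494  # kJ/mol at roughly 300 K (illustrative value)

def run_cg_simulation(potential_table):
    """Hypothetical stand-in: run a CG simulation with the tabulated pair
    potential and return g(r) on the same grid. In practice this would call
    an external MD/DPD engine."""
    raise NotImplementedError("plug in an MD/DPD engine here")

def iterative_boltzmann_inversion(r, g_target, n_iter=20, damping=0.5):
    """Refine a tabulated CG pair potential so that the CG simulation
    reproduces the target RDF obtained from the atomistic reference."""
    eps = 1e-12
    # Initial guess: potential of mean force from the target RDF
    V = -kBT * np.log(np.maximum(g_target, eps))
    for _ in range(n_iter):
        g_cg = run_cg_simulation(np.column_stack((r, V)))
        # IBI update: V <- V + kT * ln(g_cg / g_target), damped for stability
        V = V + damping * kBT * np.log(np.maximum(g_cg, eps) /
                                       np.maximum(g_target, eps))
    return V
```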
3D spherical-cap fitting procedure for (truncated) sessile nano- and micro-droplets & -bubbles.
Tan, Huanshu; Peng, Shuhua; Sun, Chao; Zhang, Xuehua; Lohse, Detlef
2016-11-01
In the study of nanobubbles, nanodroplets or nanolenses immobilised on a substrate, a cross-section of a spherical cap is widely applied to extract geometrical information from atomic force microscopy (AFM) topographic images. In this paper, we have developed a comprehensive 3D spherical-cap fitting procedure (3D-SCFP) to extract morphologic characteristics of complete or truncated spherical caps from AFM images. Our procedure integrates several advanced digital image analysis techniques to construct a 3D spherical-cap model, from which the geometrical parameters of the nanostructures are extracted automatically by a simple algorithm. The procedure takes into account all valid data points in the construction of the 3D spherical-cap model to achieve high fidelity in morphology analysis. We compare our 3D fitting procedure with the commonly used 2D cross-sectional profile fitting method to determine the contact angle of a complete spherical cap and a truncated spherical cap. The results from 3D-SCFP are consistent and accurate, while 2D fitting is unavoidably arbitrary in the selection of the cross-section and has a much lower number of data points on which the fitting can be based, which in addition is biased to the top of the spherical cap. We expect that the developed 3D spherical-cap fitting procedure will find many applications in imaging analysis.
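A minimal version of a 3D spherical-cap fit can be written as an algebraic least-squares sphere fit to the AFM height data, followed by extraction of the cap geometry relative to the substrate plane. The sketch below is a simplified illustration (no masking of invalid pixels or special truncation handling, which the full 3D-SCFP procedure includes); the synthetic cap is used only to check the recovered contact angle.

```python
import numpy as np

def fit_spherical_cap(points, substrate_z=0.0):
    """Least-squares sphere fit to surface points (N x 3 array) and extraction
    of cap geometry for a droplet/bubble sitting on the plane z = substrate_z."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2] - substrate_z
    # Algebraic fit: x^2+y^2+z^2 = 2*cx*x + 2*cy*y + 2*cz*z + (R^2 - |c|^2)
    A = np.column_stack((2 * x, 2 * y, 2 * z, np.ones_like(x)))
    b = x**2 + y**2 + z**2
    (cx, cy, cz, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(k + cx**2 + cy**2 + cz**2)

    height = R + cz                                  # cap height above the substrate
    footprint = np.sqrt(max(R**2 - cz**2, 0.0))      # contact-line radius
    contact_angle = np.degrees(np.arccos(np.clip(-cz / R, -1.0, 1.0)))
    return {"center": (cx, cy, cz), "radius": R, "height": height,
            "footprint_radius": footprint, "contact_angle_deg": contact_angle}

if __name__ == "__main__":
    # Synthetic cap: R = 5, center 2 below the substrate -> contact angle ~66.4 deg
    rng = np.random.default_rng(1)
    theta = rng.uniform(0, 2 * np.pi, 2000)
    zs = rng.uniform(0.0, 3.0, 2000)                 # heights sampled on the cap
    rad = np.sqrt(25.0 - (zs + 2.0) ** 2)            # radial distance at each height
    pts = np.column_stack((rad * np.cos(theta), rad * np.sin(theta), zs))
    print(fit_spherical_cap(pts))
```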
Li, Zhen; Bian, Xin; Yang, Xiu; Karniadakis, George Em
2016-07-28
We construct effective coarse-grained (CG) models for polymeric fluids by employing two coarse-graining strategies. The first one is a forward-coarse-graining procedure by the Mori-Zwanzig (MZ) projection while the other one applies a reverse-coarse-graining procedure, such as the iterative Boltzmann inversion (IBI) and the stochastic parametric optimization (SPO). More specifically, we perform molecular dynamics (MD) simulations of star polymer melts to provide the atomistic fields to be coarse-grained. Each molecule of a star polymer with internal degrees of freedom is coarsened into a single CG particle and the effective interactions between CG particles can be either evaluated directly from microscopic dynamics based on the MZ formalism, or obtained by the reverse methods, i.e., IBI and SPO. The forward procedure has no free parameters to tune and recovers the MD system faithfully. For the reverse procedure, we find that the parameters in CG models cannot be selected arbitrarily. If the free parameters are properly defined, the reverse CG procedure also yields an accurate effective potential. Moreover, we explain how an aggressive coarse-graining procedure introduces the many-body effect, which makes the pairwise potential invalid for the same system at densities away from the training point. From this work, general guidelines for coarse-graining of polymeric fluids can be drawn.
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, which is centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher resolution geomorphological models. Better representation of deep convection from intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters and for reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40- and 25-km of the regional climate model.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Shape Selectivity of Middle Superior Temporal Sulcus Body Patch Neurons
2017-01-01
Abstract Functional MRI studies in primates have demonstrated cortical regions that are strongly activated by visual images of bodies. The presence of such body patches in macaques allows characterization of the stimulus selectivity of their single neurons. Middle superior temporal sulcus body (MSB) patch neurons showed similar stimulus selectivity for natural, shaded, and textured images compared with their silhouettes, suggesting that shape is an important determinant of MSB responses. Here, we examined and modeled the shape selectivity of single MSB neurons. We measured the responses of single MSB neurons to a variety of shapes producing a wide range of responses. We used an adaptive stimulus sampling procedure, selecting and modifying shapes based on the responses of the neuron. Forty percent of shapes that produced the maximal response were rated by humans as animal-like, but the top shape of many MSB neurons was not judged as resembling a body. We fitted the shape selectivity of MSB neurons with a model that parameterizes shapes in terms of curvature and orientation of contour segments, with a pixel-based model, and with layers of units of convolutional neural networks (CNNs). The deep convolutional layers of CNNs provided the best goodness-of-fit, with a median explained explainable variance of the neurons’ responses of 77%. The goodness-of-fit increased along the convolutional layers’ hierarchy but was lower for the fully connected layers. Together with demonstrating the successful modeling of single unit shape selectivity with deep CNNs, the data suggest that semantic or category knowledge determines only slightly the single MSB neuron’s shape selectivity. PMID:28660250
Combining Relevance Vector Machines and exponential regression for bearing residual life estimation
NASA Astrophysics Data System (ADS)
Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico
2012-08-01
In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. Respectively, we resort to (i) Relevance Vector Machines (RVMs) for selecting a low number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimations. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and a quantification of the credibility of the results through the Prognostic Horizon (PH) metric.
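A bare-bones version of the exponential-regression component can be sketched as fitting y = a*exp(b*t) to a monitored degradation feature (here via a log-linear least-squares fit) and extrapolating to a failure threshold. This only illustrates the idea; the RVM-based selection of relevant vectors and the continuous updating described in the paper are not reproduced, and the threshold and data below are hypothetical.

```python
import numpy as np

def estimate_rul(time, feature, failure_threshold):
    """Fit an exponential degradation model y = a * exp(b * t) to a positive
    degradation feature (log-linear least squares) and extrapolate it to a
    failure threshold to obtain the Residual Useful Life (RUL)."""
    b, log_a = np.polyfit(time, np.log(feature), 1)   # slope and intercept in log space
    if b <= 0:
        return np.inf                                  # no growing degradation trend yet
    t_fail = (np.log(failure_threshold) - log_a) / b   # time at which the model hits the threshold
    return max(t_fail - time[-1], 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0, 100, 60)                        # hours of operation
    y = 0.1 * np.exp(0.03 * t) * (1 + 0.05 * rng.standard_normal(t.size))
    print("estimated RUL (h):", round(estimate_rul(t, y, failure_threshold=5.0), 1))
```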
Valuing Equal Protection in Aviation Security Screening.
Nguyen, Kenneth D; Rosoff, Heather; John, Richard S
2017-12-01
The growing number of anti-terrorism policies has elevated public concerns about discrimination. Within the context of airport security screening, the current study examines how American travelers value the principle of equal protection by quantifying the "equity premium" that they are willing to sacrifice to avoid screening procedures that result in differential treatments. In addition, we applied the notion of procedural justice to explore the effect of alternative selective screening procedures on the value of equal protection. Two-hundred and twenty-two respondents were randomly assigned to one of three selective screening procedures: (1) randomly, (2) using behavioral indicators, or (3) based on demographic characteristics. They were asked to choose between airlines using either an equal or a discriminatory screening procedure. While the former requires all passengers to be screened in the same manner, the latter mandates all passengers undergo a quick primary screening and, in addition, some passengers are selected for a secondary screening based on a predetermined selection criterion. Equity premiums were quantified in terms of monetary cost, wait time, convenience, and safety compromise. Results show that equity premiums varied greatly across respondents, with many indicating little willingness to sacrifice to avoid inequitable screening, and a smaller minority willing to sacrifice anything to avoid the discriminatory screening. The selective screening manipulation was effective in that equity premiums were greater under selection by demographic characteristics compared to the other two procedures. © 2017 Society for Risk Analysis.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John B. O.
2008-10-29
We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024-1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581-590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6 degrees C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, epsilon of 0.21) and an RMSE of 45.1 degrees C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3 degrees C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5 degrees C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
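The brute-force Monte Carlo reference mentioned above amounts to averaging the likelihood over draws from the prior. A minimal sketch, assuming the user supplies a log-likelihood function and a prior sampler, is given below; the toy Gaussian example is purely illustrative and not one of the study's hydrological models.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def monte_carlo_bme(log_likelihood, prior_sampler, n_samples=20_000, seed=0):
    """Brute-force Monte Carlo estimate of the log Bayesian model evidence:
    BME = E_prior[ p(data | theta) ], i.e. the likelihood averaged over draws
    from the prior, computed stably with logsumexp."""
    rng = np.random.default_rng(seed)
    theta = prior_sampler(rng, n_samples)                 # shape (n_samples, n_params)
    log_lik = np.array([log_likelihood(t) for t in theta])
    return logsumexp(log_lik) - np.log(n_samples)

if __name__ == "__main__":
    # Toy example: Gaussian data with unknown mean and an N(0, 2) prior on the mean
    data = np.array([1.1, 0.9, 1.3, 0.8, 1.2])
    log_lik = lambda theta: norm.logpdf(data, loc=theta[0], scale=0.5).sum()
    prior = lambda rng, n: rng.normal(0.0, 2.0, size=(n, 1))
    print("log BME:", round(monte_carlo_bme(log_lik, prior), 3))
```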
Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.
Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A
2017-01-01
Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood to be invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses show that the proposed method performs well under a variety of data conditions, and optimal performance is observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures to implement the proposed method in applied research. The importance and influences are discussed regarding the choices of informative priors with zero mean and small variances. Extensions and limitations are also pointed out.
Nere, Andrew; Hashmi, Atif; Cirelli, Chiara; Tononi, Giulio
2013-01-01
Sleep can favor the consolidation of both procedural and declarative memories, promote gist extraction, help the integration of new with old memories, and desaturate the ability to learn. It is often assumed that such beneficial effects are due to the reactivation of neural circuits in sleep to further strengthen the synapses modified during wake or transfer memories to different parts of the brain. A different possibility is that sleep may benefit memory not by further strengthening synapses, but rather by renormalizing synaptic strength to restore cellular homeostasis after net synaptic potentiation in wake. In this way, the sleep-dependent reactivation of neural circuits could result in the competitive down-selection of synapses that are activated infrequently and fit less well with the overall organization of memories. By using computer simulations, we show here that synaptic down-selection is in principle sufficient to explain the beneficial effects of sleep on the consolidation of procedural and declarative memories, on gist extraction, and on the integration of new with old memories, thereby addressing the plasticity-stability dilemma. PMID:24137153
Algorithm for Video Summarization of Bronchoscopy Procedures
2011-01-01
Background The duration of bronchoscopy examinations varies considerably depending on the diagnostic and therapeutic procedures used. It can last more than 20 minutes if a complex diagnostic work-up is included. With wide access to videobronchoscopy, the whole procedure can be recorded as a video sequence. Common practice relies on an active attitude of the bronchoscopist who initiates the recording process and usually chooses to archive only selected views and sequences. However, it may be important to record the full bronchoscopy procedure as documentation when liability issues are at stake. Furthermore, an automatic recording of the whole procedure enables the bronchoscopist to focus solely on the performed procedures. Video recordings registered during bronchoscopies include a considerable number of frames of poor quality due to blurry or unfocused images. It seems that such frames are unavoidable due to the relatively tight endobronchial space, rapid movements of the respiratory tract due to breathing or coughing, and secretions which occur commonly in the bronchi, especially in patients suffering from pulmonary disorders. Methods The use of recorded bronchoscopy video sequences for diagnostic, reference and educational purposes could be considerably extended with efficient, flexible summarization algorithms. Thus, the authors developed a prototype system to create shortcuts (called summaries or abstracts) of bronchoscopy video recordings. Such a system, based on models described in previously published papers, employs image analysis methods to exclude frames or sequences of limited diagnostic or educational value. Results The algorithm for the selection or exclusion of specific frames or shots from video sequences recorded during bronchoscopy procedures is based on several criteria, including automatic detection of "non-informative" frames, frames showing the branching of the airways, and frames including pathological lesions. Conclusions The paper focuses on the challenge of generating summaries of bronchoscopy video recordings. PMID:22185344
Gabay, Yafit; Goldfarb, Liat
2017-07-01
Although Attention-Deficit Hyperactivity Disorder (ADHD) is closely linked to executive function deficits, it has recently been attributed to procedural learning impairments that are quite distinct from the former. These observations challenge the ability of the executive function framework solely to account for the diverse range of symptoms observed in ADHD. A recent neurocomputational model emphasizes the role of striatal dopamine (DA) in explaining ADHD's broad range of deficits, but the link between this model and procedural learning impairments remains unclear. Significantly, feedback-based procedural learning is hypothesized to be disrupted in ADHD because of the involvement of striatal DA in this type of learning. In order to test this assumption, we employed two variants of a probabilistic category learning task known from the neuropsychological literature. Feedback-based (FB) and paired associate-based (PA) probabilistic category learning were employed in a non-medicated sample of ADHD participants and neurotypical participants. In the FB task, participants learned associations between cues and outcomes initially by guessing and subsequently through feedback indicating the correctness of the response. In the PA learning task, participants viewed the cue and its associated outcome simultaneously without receiving an overt response or corrective feedback. In both tasks, participants were trained across 150 trials. Learning was assessed in a subsequent test without a presentation of the outcome or corrective feedback. Results revealed an interesting disassociation in which ADHD participants performed as well as control participants in the PA task, but were impaired compared with the controls in the FB task. The learning curve during FB training differed between the two groups. Taken together, these results suggest that the ability to incrementally learn by feedback is selectively disrupted in ADHD participants. These results are discussed in relation to both the ADHD dopaminergic dysfunction model and recent findings implicating procedural learning impairments in those with ADHD. Copyright © 2017 Elsevier Inc. All rights reserved.
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
29 CFR 1607.7 - Use of other validity studies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...
CTEPP STANDARD OPERATING PROCEDURE FOR SAMPLE SELECTION (SOP-1.10)
The procedures for selecting CTEPP study subjects are described in the SOP. The primary, county-level stratification is by region and urbanicity. Six sample counties in each of the two states (North Carolina and Ohio) are selected using stratified random sampling and reflect ...
The response of numerical weather prediction analysis systems to FGGE 2b data
NASA Technical Reports Server (NTRS)
Hollingsworth, A.; Lorenc, A.; Tracton, S.; Arpe, K.; Cats, G.; Uppala, S.; Kallberg, P.
1985-01-01
An intercomparison of analyses of the main FGGE Level IIb data set is presented with three advanced analysis systems. The aims of the work are to estimate the extent and magnitude of the differences between the analyses, to identify the reasons for the differences, and finally to estimate the significance of the differences. Only extratropical analyses are considered. Objective evaluations of analysis quality, such as fit to observations, statistics of analysis differences, and mean fields are discussed. In addition, substantial emphasis is placed on subjective evaluation of a series of case studies that were selected to illustrate the importance of different aspects of the analysis procedures, such as quality control, data selection, resolution, dynamical balance, and the role of the assimilating forecast model. In some cases, the forecast models are used as selective amplifiers of analysis differences to assist in deciding which analysis was more nearly correct in the treatment of particular data.
Zheng, Qi; Peng, Limin
2016-01-01
Quantile regression provides a flexible platform for evaluating covariate effects on different segments of the conditional distribution of response. As the effects of covariates may change with quantile level, contemporaneously examining a spectrum of quantiles is expected to have a better capacity to identify variables with either partial or full effects on the response distribution, as compared to focusing on a single quantile. Under this motivation, we study a general adaptively weighted LASSO penalization strategy in the quantile regression setting, where a continuum of quantile index is considered and coefficients are allowed to vary with quantile index. We establish the oracle properties of the resulting estimator of coefficient function. Furthermore, we formally investigate a BIC-type uniform tuning parameter selector and show that it can ensure consistent model selection. Our numerical studies confirm the theoretical findings and illustrate an application of the new variable selection procedure. PMID:28008212
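A simplified, single-quantile-at-a-time version of this idea can be sketched with an L1-penalized quantile regression fitted over a grid of quantile levels, with the penalty chosen by a BIC-type score. The sketch below uses scikit-learn's QuantileRegressor with a plain (not adaptively weighted) L1 penalty and one common BIC-type form; it is a rough stand-in for the paper's adaptively weighted, continuum-of-quantiles estimator and its uniform tuning parameter selector.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def check_loss(residuals, tau):
    """Quantile (check/pinball) loss: rho_tau(r) = r * (tau - 1{r < 0})."""
    return float(np.sum(residuals * (tau - (residuals < 0))))

def fit_quantile_lasso_path(X, y, taus, alphas):
    """For each quantile level tau, fit an L1-penalized quantile regression over
    a grid of penalties and keep the fit minimizing a BIC-type score
    (one common form: log(sum of check losses) + df * log(n) / (2n))."""
    n = len(y)
    selected = {}
    for tau in taus:
        best = None
        for alpha in alphas:
            model = QuantileRegressor(quantile=tau, alpha=alpha, solver="highs")
            model.fit(X, y)
            resid = y - model.predict(X)
            df = np.count_nonzero(model.coef_)          # number of selected covariates
            bic = np.log(check_loss(resid, tau) + 1e-12) + df * np.log(n) / (2 * n)
            if best is None or bic < best[0]:
                best = (bic, alpha, model)
        selected[tau] = best
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 10))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 2] + rng.standard_normal(200)
    fits = fit_quantile_lasso_path(X, y, taus=[0.25, 0.5, 0.75],
                                   alphas=np.logspace(-3, 0, 8))
    for tau, (bic, alpha, model) in fits.items():
        print(f"tau={tau:.2f}  alpha={alpha:.4f}  "
              f"nonzero={np.count_nonzero(model.coef_)}")
```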
Aguirre, Luis Antonio; Furtado, Edgar Campos
2007-10-01
This paper reviews some aspects of nonlinear model building from data with (gray box) and without (black box) prior knowledge. The model class is very important because it determines two aspects of the final model, namely (i) the type of nonlinearity that can be accurately approximated and (ii) the type of prior knowledge that can be taken into account. Such features are usually in conflict when it comes to choosing the model class. The problem of model structure selection is also reviewed. It is argued that such a problem is philosophically different depending on the model class, and it is suggested that the choice of model class should be based on the type of a priori knowledge available. A procedure is proposed to build polynomial models from data on a Poincaré section and prior knowledge about the first period-doubling bifurcation, for which the normal form is also polynomial. The final models approximate dynamical data in a least-squares sense and, by design, present the first period-doubling bifurcation at a specified value of parameters. The procedure is illustrated by means of simulated examples.
A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
NASA Astrophysics Data System (ADS)
Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.
2014-03-01
Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in ternary mixture, namely, Partial Least Squares (PLS) as traditional chemometric model and Artificial Neural Networks (ANN) as advanced model. PLS and ANN were applied with and without variable selection procedure (Genetic Algorithm GA) and data compression procedure (Principal Component Analysis PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and pharmaceutical dosage form via handling the UV spectral data. A 3-factor 5-level experimental design was established resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
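The PLS part of such a calibration can be sketched as choosing the number of latent variables by cross-validation on the calibration mixtures and then predicting the validation set. The sketch below uses synthetic spectra built from hypothetical pure-component profiles; it illustrates only the PLS step, not the genetic-algorithm variable selection, the PCA compression or the ANN models.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def select_pls_components(X_cal, Y_cal, max_components=8, cv=5):
    """Choose the number of PLS latent variables by cross-validation on the
    calibration spectra (lowest root-mean-square error of cross-validation)."""
    best = None
    for n in range(1, max_components + 1):
        Y_cv = cross_val_predict(PLSRegression(n_components=n), X_cal, Y_cal, cv=cv)
        rmsecv = float(np.sqrt(np.mean((Y_cal - Y_cv) ** 2)))
        if best is None or rmsecv < best[0]:
            best = (rmsecv, n)
    return best  # (RMSECV, number of latent variables)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    conc = rng.uniform(0, 1, size=(15, 3))          # calibration concentrations (3 analytes)
    pure = rng.uniform(0, 1, size=(3, 100))         # hypothetical pure-component UV spectra
    X = conc @ pure + 0.01 * rng.standard_normal((15, 100))
    rmsecv, n_comp = select_pls_components(X, conc)
    print(f"selected {n_comp} latent variables, RMSECV = {rmsecv:.4f}")
```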
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA; Willse, Alan [Richland, WA; Wahl, Karen [Richland, WA; Wahl, Jon [Richland, WA
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
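Under an independence assumption across indexed peaks, the match/non-match decision described above reduces to a Bernoulli log-likelihood-ratio test on the peak-presence vector. The sketch below is a simplified illustration with hypothetical per-peak probabilities; the full model in the patent also covers quantified peak characteristics and multiple-comparison procedures, which are omitted here.

```python
import numpy as np

def log_likelihood_ratio(peaks, p_match, p_nonmatch):
    """Compare a binary peak-presence vector against a reference species.

    peaks      : 0/1 array, presence of each indexed peak in the sample
    p_match    : per-peak presence probability if the sample matches the reference
    p_nonmatch : per-peak presence probability if it does not

    Returns the log likelihood ratio; positive values favour a match.
    """
    peaks = np.asarray(peaks, dtype=float)
    ll_match = peaks * np.log(p_match) + (1 - peaks) * np.log(1 - p_match)
    ll_non = peaks * np.log(p_nonmatch) + (1 - peaks) * np.log(1 - p_nonmatch)
    return float(np.sum(ll_match - ll_non))

if __name__ == "__main__":
    # Hypothetical reference model over five indexed peaks
    p_match = np.array([0.95, 0.90, 0.85, 0.10, 0.05])
    p_nonmatch = np.full(5, 0.30)
    sample = [1, 1, 1, 0, 0]
    llr = log_likelihood_ratio(sample, p_match, p_nonmatch)
    print("match" if llr > 0 else "no match", round(llr, 2))
```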
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction-error or other specific statistic, as well as two alternative cross-validation techniques, adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostic, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving efficiency and accuracy of the geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted at Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed to catch global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
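The progressive-sampling idea can be sketched as training networks of increasing complexity on increasing chunks of the stream, stopping when a held-out error no longer improves, and averaging the ensemble's predictions. The chunk sizes, network sizes and tolerance below are illustrative choices, not the values used with the MISR retrievals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def progressive_ensemble(X, y, X_val, y_val, tol=1e-3):
    """Train a sequence of increasingly complex networks on increasingly large
    chunks of a data stream; stop when validation error no longer improves,
    and return the ensemble (predictions are averaged)."""
    chunk_sizes = [500, 1000, 2000, 4000, 8000]
    hidden_sizes = [(4,), (8,), (16,), (32,), (64,)]
    ensemble, best_err = [], np.inf
    for size, hidden in zip(chunk_sizes, hidden_sizes):
        size = min(size, len(X))
        net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
        net.fit(X[:size], y[:size])
        ensemble.append(net)
        pred = np.mean([m.predict(X_val) for m in ensemble], axis=0)
        err = mean_squared_error(y_val, pred)
        if best_err - err < tol:        # no meaningful improvement: stop growing
            break
        best_err = err
    return ensemble

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    X = rng.standard_normal((10000, 5))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(10000)
    ens = progressive_ensemble(X[:8000], y[:8000], X[8000:], y[8000:])
    print(f"ensemble of {len(ens)} networks")
```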
Objective calibration of numerical weather prediction models
NASA Astrophysics Data System (ADS)
Voudouri, A.; Khain, P.; Carmona, I.; Bellprat, O.; Grazzini, F.; Avgoustoglou, E.; Bettems, J. M.; Kaufmann, P.
2017-07-01
Numerical weather prediction (NWP) and climate models use parameterization schemes for physical processes, which often include free or poorly confined parameters. Model developers normally calibrate the values of these parameters subjectively to improve the agreement of forecasts with available observations, a procedure referred to as expert tuning. A practicable objective multivariate calibration method built on a quadratic meta-model (MM), which has been applied to a regional climate model (RCM), has been shown to be at least as good as expert tuning. Based on these results, an approach to implementing the methodology for an NWP model is presented in this study. Challenges in transferring the methodology from RCM to NWP are not restricted to the use of higher resolution and different time scales. The sensitivity of the NWP model quality with respect to the model parameter space has to be clarified, and the overall procedure has to be optimized in terms of the amount of computing resources required for the calibration of an NWP model. Three free model parameters affecting mainly turbulence parameterization schemes were originally selected with respect to their influence on the variables associated with daily forecasts, such as daily minimum and maximum 2 m temperature as well as 24 h accumulated precipitation. Preliminary results indicate that the approach is both affordable in terms of computer resources and meaningful in terms of improved forecast quality. In addition, the proposed methodology has the advantage of being a replicable procedure that can be applied when an updated model version is launched and/or used to customize the same model implementation over different climatological areas.
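The quadratic meta-model step can be illustrated as fitting a second-order polynomial surrogate of a verification score over the free parameters from a modest number of model runs and then minimizing the surrogate. The sketch below uses a hypothetical score and parameter bounds; it is a schematic of the MM idea, not the calibration suite used in the study.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

def calibrate_with_metamodel(param_samples, scores, bounds):
    """Fit a quadratic meta-model score ~ f(parameters) from a handful of model
    runs and return the parameter vector minimizing the surrogate
    (lower score = better agreement with observations)."""
    poly = PolynomialFeatures(degree=2, include_bias=True)
    design = poly.fit_transform(param_samples)
    surrogate = LinearRegression().fit(design, scores)

    def predicted_score(p):
        return surrogate.predict(poly.transform(p.reshape(1, -1)))[0]

    x0 = np.mean(bounds, axis=1)                     # start at the centre of the box
    res = minimize(predicted_score, x0, bounds=bounds, method="L-BFGS-B")
    return res.x, res.fun

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    # Three hypothetical turbulence parameters, sampled in [0, 1]^3
    params = rng.uniform(0, 1, size=(30, 3))
    # Hypothetical verification score with a minimum near (0.3, 0.7, 0.5)
    truth = np.array([0.3, 0.7, 0.5])
    scores = np.sum((params - truth) ** 2, axis=1) + 0.01 * rng.standard_normal(30)
    best_p, best_s = calibrate_with_metamodel(params, scores, bounds=[(0, 1)] * 3)
    print("calibrated parameters:", np.round(best_p, 3))
```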
29 CFR 1926.1403 - Assembly/Disassembly-selection of manufacturer or employer procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 29 Labor 8 2013-07-01 2013-07-01 false Assembly/Disassembly-selection of manufacturer or employer... CONSTRUCTION Cranes and Derricks in Construction § 1926.1403 Assembly/Disassembly—selection of manufacturer or... applicable to assembly and disassembly, or (b) Employer procedures for assembly and disassembly. Employer...
29 CFR 1926.1403 - Assembly/Disassembly-selection of manufacturer or employer procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 29 Labor 8 2012-07-01 2012-07-01 false Assembly/Disassembly-selection of manufacturer or employer... CONSTRUCTION Cranes and Derricks in Construction § 1926.1403 Assembly/Disassembly—selection of manufacturer or... applicable to assembly and disassembly, or (b) Employer procedures for assembly and disassembly. Employer...
29 CFR 1926.1403 - Assembly/Disassembly-selection of manufacturer or employer procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 29 Labor 8 2014-07-01 2014-07-01 false Assembly/Disassembly-selection of manufacturer or employer... CONSTRUCTION Cranes and Derricks in Construction § 1926.1403 Assembly/Disassembly—selection of manufacturer or... applicable to assembly and disassembly, or (b) Employer procedures for assembly and disassembly. Employer...
29 CFR 1926.1403 - Assembly/Disassembly-selection of manufacturer or employer procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 29 Labor 8 2011-07-01 2011-07-01 false Assembly/Disassembly-selection of manufacturer or employer... CONSTRUCTION Cranes and Derricks in Construction § 1926.1403 Assembly/Disassembly—selection of manufacturer or... applicable to assembly and disassembly, or (b) Employer procedures for assembly and disassembly. Employer...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2014 CFR
2014-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2013 CFR
2013-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
29 CFR 1607.5 - General standards for validity studies.
Code of Federal Regulations, 2012 CFR
2012-07-01
... experience on the job. J. Interim use of selection procedures. Users may continue the use of a selection... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users... which has an adverse impact and which selection procedure has an adverse impact, each user should...
32 CFR 644.63 - Contracting for title evidence.
Code of Federal Regulations, 2011 CFR
2011-07-01
... direct from the Land and Natural Resources Division, Department of Justice, WASH DC 20530. (b) Selection procedure. (1) Normally selection of persons or firms to perform title evidence services will be based upon... negotiations may be conducted. Selections shall be in accordance with the procedures set forth below: (i) A...
Perretta, Silvana; Dallemagne, Bernard; Donatelli, Gianfranco; Diemunsch, Pierre; Marescaux, Jacques
2011-01-01
The most effective treatment of achalasia is Heller myotomy. To explore a submucosal endoscopic myotomy technique tailored to esophageal physiology testing and to compare it with the open technique. Prospective acute and survival comparative study in pigs (n = 12; 35 kg). University animal research center. Eight acute myotomies (4 open and 4 endoscopic) followed by 4 survival endoscopic procedures. Preoperative and postoperative manometry; esophagogastric junction (EGJ) distensibility before and after selective division of muscular fibers at the EGJ and after the myotomy was prolonged to a standard length by using the EndoFLIP Functional Lumen Imaging Probe (Crospon, Galway, Ireland). All procedures were successful, with no intraoperative or postoperative complications. In the survival group, the animals recovered promptly from surgery. Postoperative manometry demonstrated a 50% drop in mean lower esophageal sphincter pressure (LESp) in the endoscopic group (mean preoperative LESp, 22.2 ± 3.3 mm Hg; mean postoperative LESp, 11.34 ± 2.7 mm Hg; P < .005) and a 69% loss in the open procedure group (mean preoperative LESp, 24.2 ± 3.2 mm Hg; mean postoperative LESp, 7.4 ± 4 mm Hg; P < .005). The EndoFLIP monitoring did not show any distensibility difference between the 2 techniques, with the main improvement occurring when the clasp circular fibers were taken. Healthy animal model; small sample. Endoscopic submucosal esophageal myotomy is feasible and safe. The lack of a significant difference in EGJ distensibility between the open and endoscopic procedure is very appealing. Were it to be perfected in a human population, this endoscopic approach could suggest a new strategy in the treatment of selected achalasia patients. Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Screening procedure for airborne pollutants emitted from a high-tech industrial complex in Taiwan.
Wang, John H C; Tsai, Ching-Tsan; Chiang, Chow-Feng
2015-11-01
Despite the modernization of computational techniques, atmospheric dispersion modeling remains a complicated task as it involves the use of large amounts of interrelated data with wide variability. The continuously growing list of regulated air pollutants also increases the difficulty of this task. To address these challenges, this study aimed to develop a screening procedure for a long-term exposure scenario by generating a site-specific lookup table of hourly averaged dispersion factors (χ/Q), which could be evaluated by downwind distance, direction, and effective plume height only. To allow for such simplification, the average plume rise was weighted with the frequency distribution of meteorological data so that the prediction of χ/Q could be decoupled from the meteorological data. To illustrate this procedure, 20 receptors around a high-tech complex in Taiwan were selected. Five consecutive years of hourly meteorological data were acquired to generate a lookup table of χ/Q, as well as two regression formulas of plume rise as functions of downwind distance, buoyancy flux, and stack height. To calculate the concentrations for the selected receptors, a six-step Excel algorithm was programmed with four years of emission records, and the 10 most critical toxics were screened out. A validation check using the Industrial Source Complex (ISC3) model with the same meteorological and emission data showed an acceptable overestimate of 6.7% in the average concentration of 10 nearby receptors. The procedure proposed in this study allows practical and focused emission management for a large industrial complex and can therefore be integrated into an air quality decision-making system. Copyright © 2015 Elsevier Ltd. All rights reserved.
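As a rough illustration of the screening idea described above, the sketch below looks up hypothetical hourly averaged dispersion factors (χ/Q) by wind sector, downwind distance, and effective plume height and multiplies them by an emission rate to obtain a long-term average concentration. The bin edges, table values, and function names are invented for illustration and are not taken from the study.

```python
# Minimal sketch of the lookup-table screening idea: a hypothetical table of
# hourly averaged dispersion factors (chi/Q, s/m^3) indexed by wind sector,
# downwind distance, and effective plume height.  The receptor concentration
# is approximated as emission rate (g/s) times chi/Q.  All names, bin edges,
# and values here are illustrative assumptions.
import numpy as np

distances_m = np.array([100.0, 300.0, 1000.0, 3000.0])   # downwind distance bins
heights_m = np.array([10.0, 30.0, 60.0])                  # effective plume height bins
n_sectors = 16                                            # 22.5-degree wind sectors

# Hypothetical chi/Q table, shape (sector, distance, height); real values would
# come from weighting a Gaussian plume model with the joint frequency of
# meteorological conditions, as described in the abstract.
rng = np.random.default_rng(0)
chi_over_q = rng.uniform(1e-7, 1e-5, size=(n_sectors, len(distances_m), len(heights_m)))

def screen_concentration(emission_g_per_s, sector, distance_m, plume_height_m):
    """Long-term average concentration (g/m^3) at a receptor via table lookup."""
    i = int(np.argmin(np.abs(distances_m - distance_m)))    # nearest distance bin
    j = int(np.argmin(np.abs(heights_m - plume_height_m)))  # nearest height bin
    return emission_g_per_s * chi_over_q[sector, i, j]

# Example: 0.5 g/s source, receptor in sector 3, 1 km downwind, 30 m plume height
print(screen_concentration(0.5, sector=3, distance_m=1000.0, plume_height_m=30.0))
```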
Resampling procedures to identify important SNPs using a consensus approach.
Pardy, Christopher; Motyer, Allan; Wilson, Susan
2011-11-29
Our goal is to identify common single-nucleotide polymorphisms (SNPs) (minor allele frequency > 1%) that add predictive accuracy above that gained by knowledge of easily measured clinical variables. We take an algorithmic approach to predict each phenotypic variable using a combination of phenotypic and genotypic predictors. We perform our procedure on the first simulated replicate and then validate against the others. Our procedure performs well when predicting Q1 but is less successful for the other outcomes. We use resampling procedures where possible to guard against false positives and to improve generalizability. The approach is based on finding a consensus regarding important SNPs by applying random forests and the least absolute shrinkage and selection operator (LASSO) on multiple subsamples. Random forests are used first to discard unimportant predictors, narrowing our focus to roughly 100 important SNPs. A cross-validation LASSO is then used to further select variables. We combine these procedures to guarantee that cross-validation can be used to choose a shrinkage parameter for the LASSO. If the clinical variables were unavailable, this prefiltering step would be essential. We perform the SNP-based analyses simultaneously rather than one at a time to estimate SNP effects in the presence of other causal variants. We analyzed the first simulated replicate of Genetic Analysis Workshop 17 without knowledge of the true model. Post-conference knowledge of the simulation parameters allowed us to investigate the limitations of our approach. We found that many of the false positives we identified were substantially correlated with genuine causal SNPs.
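A minimal sketch of the two-stage consensus idea is given below, assuming a generic numeric genotype matrix and a continuous phenotype: random forests discard most SNPs, a cross-validated LASSO selects among the survivors, and repeating this over subsamples keeps the SNPs chosen most often. The simulated data, subsample count, and 100-SNP cutoff are illustrative assumptions, not the workshop analysis itself.

```python
# Consensus SNP selection sketch: random-forest prefilter followed by a
# cross-validated LASSO, repeated over random subsamples.  SNPs selected in
# at least half of the subsamples form the consensus set.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(200, 500)).astype(float)   # toy SNP dosages (0/1/2)
y = X[:, 0] - 0.8 * X[:, 1] + rng.normal(size=200)      # toy phenotype

counts = np.zeros(X.shape[1])
n_subsamples = 20
for _ in range(n_subsamples):
    idx = rng.choice(len(y), size=int(0.8 * len(y)), replace=False)
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[idx], y[idx])
    top = np.argsort(rf.feature_importances_)[-100:]      # keep ~100 important SNPs
    lasso = LassoCV(cv=5).fit(X[np.ix_(idx, top)], y[idx])
    counts[top[lasso.coef_ != 0]] += 1                    # tally SNPs kept by the LASSO

consensus = np.where(counts >= 0.5 * n_subsamples)[0]     # selected in >=50% of subsamples
print("consensus SNP indices:", consensus)
```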
A Finite Element Procedure for Calculating Fluid-Structure Interaction Using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Chargin, Mladen; Gartmeier, Otto
1990-01-01
This report is intended to serve two purposes. The first is to present a survey of the theoretical background of the dynamic interaction between a non-viscid, compressible fluid and an elastic structure. Section one presents a short survey of the application of the finite element method (FEM) to the area of fluid-structure interaction (FSI). Section two describes the mathematical foundation of the structure and fluid with special emphasis on the fluid. The main steps in establishing the finite element (FE) equations for the fluid-structure coupling are discussed in section three. The second purpose is to demonstrate the application of MSC/NASTRAN to the solution of FSI problems. Some specific topics, such as the fluid-structure analogy, acoustic absorption, and acoustic contribution analysis, are described in section four. Section five deals with the organization of the acoustic procedure flowchart. Section six includes the most important information that a user needs for applying the acoustic procedure to practical FSI problems. Beginning with some rules concerning the FE modeling of the coupled system, the NASTRAN USER DECKs for the different steps are described. The goal of section seven is to demonstrate the use of the acoustic procedure with some examples. This demonstration includes an analytic verification of selected FE results. The analytical description considers only some aspects of FSI and is not intended to be mathematically complete. Finally, section 8 presents an application of the acoustic procedure to vehicle interior acoustic analysis with selected results.
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.
2016-01-01
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
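The sketch below loosely mirrors the comparison described above: ridge regression on raw marker data versus marker data transformed through the marker correlation matrix, scored by cross-validation. The simulated family means, the ridge penalty, and the specific form of the transformation (right-multiplication by the correlation matrix) are assumptions for illustration; they are not the exome-capture data or the exact procedure of the study.

```python
# Hedged sketch: does transforming the marker matrix by the marker correlation
# matrix change cross-validated prediction accuracy of a ridge model?
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_fam, n_mrk = 137, 1000
X = rng.integers(0, 3, size=(n_fam, n_mrk)).astype(float)   # toy marker dosages
X -= X.mean(axis=0)                                          # center markers
y = X[:, :50] @ rng.normal(0, 0.2, 50) + rng.normal(0, 1.0, n_fam)  # toy trait

R = np.corrcoef(X, rowvar=False)                 # marker x marker correlation matrix
X_t = X @ R                                      # transformed marker data (assumption)

for name, M in [("raw markers", X), ("correlation-transformed", X_t)]:
    acc = cross_val_score(Ridge(alpha=100.0), M, y, cv=5, scoring="r2")
    print(name, "mean CV R^2:", round(acc.mean(), 3))
```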
5 CFR 532.215 - Establishments included in regular appropriated fund surveys.
Code of Federal Regulations, 2010 CFR
2010-01-01
... in surveys shall be selected under standard probability sample selection procedures. In areas with... establishment list drawn under statistical sampling procedures. [55 FR 46142, Nov. 1, 1990] ...
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the model parameters, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. An experiment was chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate large and small sample behavior of the sequential adaptive procedure.
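The fragment below is a hedged sketch of the core Bayesian bookkeeping described above for two competing linear models with known error variance and normal priors: each model's evidence follows from the marginal normal law of the data, and posterior model probabilities are updated accordingly (sampling could stop once one probability exceeds a chosen threshold). The models, priors, and data are invented; the report's actual models, stopping rule, and Kullback-Leibler design criterion are not reproduced.

```python
# Posterior model probabilities for two linear models with known error
# variance and normal priors.  With theta ~ N(m0, V0), the marginal law of y
# under a model with design matrix X is N(X m0, sigma^2 I + X V0 X^T).
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(y, X, m0, V0, sigma2):
    mean = X @ m0
    cov = sigma2 * np.eye(len(y)) + X @ V0 @ X.T
    return multivariate_normal(mean=mean, cov=cov).logpdf(y)

rng = np.random.default_rng(2)
n = 30
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)          # data generated from a line

X1 = np.column_stack([np.ones(n), x])                       # model 1: straight line
X2 = np.column_stack([np.ones(n), x, x**2])                 # model 2: quadratic
prior = np.array([0.5, 0.5])                                # prior model probabilities

log_ev = np.array([log_evidence(y, X1, np.zeros(2), 4.0 * np.eye(2), 0.09),
                   log_evidence(y, X2, np.zeros(3), 4.0 * np.eye(3), 0.09)])
w = prior * np.exp(log_ev - log_ev.max())
posterior = w / w.sum()
print("posterior model probabilities:", posterior)   # stop sampling once one exceeds a threshold
```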
Harvey, Matthew J; Mason, Nicholas J; McLean, Andrew; Rzepa, Henry S
2015-01-01
We describe three different procedures based on metadata standards for enabling automated retrieval of scientific data from digital repositories utilising the persistent identifier of the dataset with optional specification of the attributes of the data document such as filename or media type. The procedures are demonstrated using the JSmol molecular visualizer as a component of a web page and Avogadro as a stand-alone modelling program. We compare our methods for automated retrieval of data from a standards-compliant data repository with those currently in operation for a selection of existing molecular databases and repositories. Our methods illustrate the importance of adopting a standards-based approach of using metadata declarations to increase access to and discoverability of repository-based data. Graphical abstract.
Repetition priming in selective attention: A TVA analysis.
Ásgeirsson, Árni Gunnar; Kristjánsson, Árni; Bundesen, Claus
2015-09-01
Current behavior is influenced by events in the recent past. In visual attention, this is expressed in many variations of priming effects. Here, we investigate color priming in a brief exposure digit-recognition task. Observers performed a masked odd-one-out singleton recognition task where the target-color either repeated or changed between subsequent trials. Performance was measured by recognition accuracy over exposure durations. The purpose of the study was to replicate earlier findings of perceptual priming in brief displays and to model those results based on a Theory of Visual Attention (TVA; Bundesen, 1990). We tested 4 different definitions of a generic TVA-model and assessed their explanatory power. Our hypothesis was that priming effects could be explained by selective mechanisms, and that target-color repetitions would only affect the selectivity parameter (α) of our models. Repeating target colors enhanced performance for all 12 observers. As predicted, this was only true under conditions that required selection of a target among distractors, but not when a target was presented alone. Model fits by TVA were obtained with a trial-by-trial maximum likelihood estimation procedure that estimated 4-15 free parameters, depending on the particular model. We draw two main conclusions. Color priming can be modeled simply as a change in selectivity between conditions of repetition or swap of target color. Depending on the desired resolution of analysis, priming can accurately be modeled by a simple four-parameter model, where VSTM capacity and spatial biases of attention are ignored, or more fine-grained by a 10-parameter model that takes these aspects into account. Copyright © 2015 Elsevier B.V. All rights reserved.
Coutinho, C C; Mercadante, M E Z; Jorge, A M; Paz, C C P; El Faro, L; Monteiro, F M
2015-10-30
The effect of selection for postweaning weight was evaluated within the growth curve parameters for both growth and carcass traits. Records of 2404 Nellore animals from three selection lines were analyzed: two selection lines for high postweaning weight, selection (NeS) and traditional (NeT); and a control line (NeC) in which animals were selected for postweaning weight close to the average. Body weight (BW), hip height (HH), rib eye area (REA), back fat thickness (BFT), and rump fat thickness (RFT) were measured and records collected from animals 8 to 20 (males) and 11 to 26 (females) months of age. The parameters A (asymptotic value) and k (growth rate) were estimated using the nonlinear model procedure of the Statistical Analysis System program, which included the fixed effect of line (NeS, NeT, and NeC) in the model, with the objective of evaluating differences in the estimated parameters between lines. Selected animals (NeS and NeT) showed higher growth rates than control line animals (NeC) for all traits. The line effect on curve parameters was significant (P < 0.001) for BW, HH, and REA in males, and for BFT and RFT in females. Selection for postweaning weight was effective in altering growth curves, resulting in animals with higher growth potential.
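As an illustration of estimating an asymptotic value A and growth rate k by nonlinear least squares, the sketch below fits a Brody-type curve, W(t) = A(1 - B e^(-kt)), to simulated weight-for-age data. The abstract does not state which growth function was used, so the Brody form, the simulated data, and the starting values are assumptions.

```python
# Nonlinear least-squares fit of an assumed Brody growth curve to toy data.
import numpy as np
from scipy.optimize import curve_fit

def brody(t, A, B, k):
    return A * (1.0 - B * np.exp(-k * t))

rng = np.random.default_rng(3)
age_months = np.linspace(8, 26, 40)
weight_kg = brody(age_months, 450.0, 0.9, 0.08) + rng.normal(scale=10.0, size=age_months.size)

params, cov = curve_fit(brody, age_months, weight_kg, p0=[400.0, 0.8, 0.05])
A_hat, B_hat, k_hat = params
print(f"A = {A_hat:.1f} kg, k = {k_hat:.3f} per month")
```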
Restoration of distorted depth maps calculated from stereo sequences
NASA Technical Reports Server (NTRS)
Damour, Kevin; Kaufman, Howard
1991-01-01
A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
The evolution of climate. [climatic effects of polar wandering and continental drift
NASA Technical Reports Server (NTRS)
Donn, W. L.; Shaw, D.
1975-01-01
A quantitative evaluation is made of the climatic effects of polar wandering plus continental drift in order to determine whether this mechanism alone could explain the deterioration of climate that occurred from the warmth of Mesozoic time to the ice age conditions of the late Cenozoic. To investigate the effect of the changing geography of the past on climate, Adem's thermodynamic model was selected. The application of the model is discussed and preliminary results are given.
Constitutive modeling for isotropic materials (HOST)
NASA Technical Reports Server (NTRS)
Chan, Kwai S.; Lindholm, Ulric S.; Bodner, S. R.; Hill, Jeff T.; Weber, R. M.; Meyer, T. G.
1986-01-01
The results of the third year of work on a program which is part of the NASA Hot Section Technology program (HOST) are presented. The goals of this program are: (1) the development of unified constitutive models for rate dependent isotropic materials; and (2) the demonstration of the use of unified models in structural analyses of hot section components of gas turbine engines. The unified models selected for development and evaluation are those of Bodner-Partom and of Walker. A test procedure was developed for assisting the generation of a data base for the Bodner-Partom model using a relatively small number of specimens. This test procedure involved performing a tensile test at a temperature of interest that involves a succession of strain-rate changes. The results for B1900+Hf indicate that material constants related to hardening and thermal recovery can be obtained on the basis of such a procedure. Strain aging, thermal recovery, and unexpected material variations, however, precluded an accurate determination of the strain-rate sensitivity parameter in this exercise. The effects of casting grain size on the constitutive behavior of B1900+Hf were studied and no particular grain size effect was observed. A systematic procedure was also developed for determining the material constants in the Bodner-Partom model. Both the new test procedure and the method for determining material constants were applied to the alternate material, Mar-M247. Test data including tensile, creep, cyclic and nonproportional biaxial (tension/torsion) loading were collected. Good correlations were obtained between the Bodner-Partom model and experiments. A literature survey was conducted to assess the effects of thermal history on the constitutive behavior of metals. Thermal history effects are expected to be present at temperature regimes where strain aging and change of microstructure are important. Possible modifications to the Bodner-Partom model to account for these effects are outlined. The use of a unified constitutive model for hot section component analyses was demonstrated by applying the Walker model and the MARC finite-element code to a B1900+Hf airfoil problem.
Identifying and Modeling Dynamic Preference Evolution in Multipurpose Water Resources Systems
NASA Astrophysics Data System (ADS)
Mason, E.; Giuliani, M.; Castelletti, A.; Amigoni, F.
2018-04-01
Multipurpose water systems are usually operated on a tradeoff of conflicting operating objectives. Under steady-state climatic and socioeconomic conditions, such a tradeoff is supposed to represent a fair and/or efficient preference. Extreme variability in external forcing might affect a water operator's risk aversion and force a change in his or her preference. Properly accounting for these shifts is key to any rigorous retrospective assessment of the operator's behavior, and to building descriptive models for projecting the future system evolution. In this study, we explore how the selection of different preferences is linked to variations in the external forcing. We argue that preference selection evolves according to recent, extreme variations in system performance: underperforming in one of the objectives pushes the preference toward the harmed objective. To test this assumption, we developed a rational procedure to simulate the operator's preference selection. We map this selection onto a multilateral negotiation, where multiple virtual agents independently optimize different objectives. The agents periodically negotiate a compromise policy for the operation of the system. Agents' attitudes in each negotiation step are determined by the recent system performance measured by the specific objective they maximize. We then propose a numerical model of preference dynamics that implements a concept from cognitive psychology, the availability bias. We test our modeling framework on a synthetic lake operated for flood control and water supply. Results show that our model successfully captures the operator's preference selection and dynamic evolution driven by extreme wet and dry situations.
Valinoti, Maddalena; Fabbri, Claudio; Turco, Dario; Mantovan, Roberto; Pasini, Antonio; Corsi, Cristiana
2018-01-01
Radiofrequency ablation (RFA) is an important and promising therapy for atrial fibrillation (AF) patients. Optimization of patient selection and the availability of an accurate anatomical guide could improve RFA success rate. In this study we propose a unified, fully automated approach to build a 3D patient-specific left atrium (LA) model including pulmonary veins (PVs) in order to provide an accurate anatomical guide during RFA and without PVs in order to characterize LA volumetry and support patient selection for AF ablation. Magnetic resonance data from twenty-six patients referred for AF RFA were processed applying an edge-based level set approach guided by a phase-based edge detector to obtain the 3D LA model with PVs. An automated technique based on the shape diameter function was designed and applied to remove PVs and compute LA volume. 3D LA models were qualitatively compared with 3D LA surfaces acquired during the ablation procedure. An expert radiologist manually traced the LA on MR images twice. LA surfaces from the automatic approach and manual tracing were compared by mean surface-to-surface distance. In addition, LA volumes were compared with volumes from manual segmentation by linear and Bland-Altman analyses. Qualitative comparison of 3D LA models showed several inaccuracies, in particular PVs reconstruction was not accurate and left atrial appendage was missing in the model obtained during RFA procedure. LA surfaces were very similar (mean surface-to-surface distance: 2.3±0.7mm). LA volumes were in excellent agreement (y=1.03x-1.4, r=0.99, bias=-1.37ml (-1.43%) SD=2.16ml (2.3%), mean percentage difference=1.3%±2.1%). Results showed the proposed 3D patient-specific LA model with PVs is able to better describe LA anatomy compared to models derived from the navigation system, thus potentially improving electrograms and voltage information location and reducing fluoroscopic time during RFA. Quantitative assessment of LA volume derived from our 3D LA model without PVs is also accurate and may provide important information for patient selection for RFA. Copyright © 2017 Elsevier Inc. All rights reserved.
The effect of prenatal care on birthweight: a full-information maximum likelihood approach.
Rous, Jeffrey J; Jewell, R Todd; Brown, Robert W
2004-03-01
This paper uses a full-information maximum likelihood estimation procedure, the Discrete Factor Method, to estimate the relationship between birthweight and prenatal care. This technique controls for the potential biases surrounding both the sample selection of the pregnancy-resolution decision and the endogeneity of prenatal care. In addition, we use the actual number of prenatal care visits; other studies have normally measured prenatal care as the month care is initiated. We estimate a birthweight production function using 1993 data from the US state of Texas. The results underscore the importance of correcting for estimation problems. Specifically, a model that does not control for sample selection and endogeneity overestimates the benefit of an additional visit for women who have relatively few visits. This overestimation may indicate 'positive fetal selection,' i.e., women who did not abort may have healthier babies. Also, a model that does not control for self-selection and endogeneity predicts that past 17 visits, an additional visit leads to lower birthweight, while a model that corrects for these estimation problems predicts a positive effect for additional visits. This result shows the effect of mothers with less healthy fetuses making more prenatal care visits, known as 'adverse selection' in prenatal care. Copyright 2003 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Friedman, Brenda G.; And Others
The manual is intended to help students with language learning disabilities master the academic task of research paper writing. A seven-step procedure is advocated for students and their tutors: (1) select a workable topic, then limit and focus it; (2) use library references to identify sources from which to prepare a working bibliography; (3)…
Landfill gas control at military installations. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafer, R.A.; Renta-Babb, A.; Bandy, J.T.
1984-01-01
This report provides information useful to Army personnel responsible for recognizing and solving potential problems from gas generated by landfills. Information is provided on recognizing and gauging the magnitude of landfill gas problems; selecting appropriate gas control strategies, procedures, and equipment; use of computer modeling to predict gas production and migration and the success of gas control devices; and safety considerations.
ERIC Educational Resources Information Center
Bouchie, Mary Ellen; Vos, Robert
Vocational teachers for industrial and health occupations programs are usually recruited and selected directly from industry based upon their work experience, craft skills, and other technical criteria. This procedure provides schools with technically competent instructors who have little idea of how to teach. The certification requirements of…
ERIC Educational Resources Information Center
Yu, Bing; Hong, Guanglei
2012-01-01
This study uses simulation examples representing three types of treatment assignment mechanisms in data generation (the random intercept and slopes setting, the random intercept setting, and a third setting with a cluster-level treatment and an individual-level outcome) in order to determine optimal procedures for reducing bias and improving…
Hanford internal dosimetry program manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carbaugh, E.H.; Sula, M.J.; Bihl, D.E.
1989-10-01
This document describes the Hanford Internal Dosimetry program. Program services include administering the bioassay monitoring program, evaluating and documenting assessments of internal exposure and dose, ensuring that analytical laboratories conform to requirements, selecting and applying appropriate models and procedures for evaluating internal radionuclide deposition and the resulting dose, and technically guiding and supporting Hanford contractors in matters regarding internal dosimetry. 13 refs., 16 figs., 42 tabs.
EMC system test performance on Spacelab
NASA Astrophysics Data System (ADS)
Schwan, F.
1982-07-01
Electromagnetic compatibility testing of the Spacelab engineering model is discussed. Documentation, test procedures (including data monitoring and test configuration set up) and performance assessment approach are described. Equipment was assembled into selected representative flight configurations. The physical and functional interfaces between the subsystems were demonstrated within the integration and test sequence which culminated in the flyable configuration Long Module plus one Pallet.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for three-dimensional solid models is required.
DOT National Transportation Integrated Search
1985-10-01
This order contains qualification criteria and procedures for the selection and appointment of Designated Airworthiness Representatives (DAR's) to perform certain certification functions as representatives of the Administrator.
A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles
NASA Astrophysics Data System (ADS)
Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.
The aim of the present work is to introduce a fast procedure to optimize thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials able to sustain high temperatures and meeting the weight requirements are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. Those simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one dimensional and can be used for optimization processes in order to determine the optimum configuration with regard to weight, temperature and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two and three-dimensional analyses are performed in order to validate those simplified models. Thermal-structural analyses and optimizations are executed by adopting the Ansys FEM code.
A multi-site study on medical school selection, performance, motivation and engagement.
Wouters, A; Croiset, G; Schripsema, N R; Cohen-Schotanus, J; Spaai, G W G; Hulsman, R L; Kusurkar, R A
2017-05-01
Medical schools seek ways to improve their admissions strategies, since the available methods prove to be suboptimal for selecting the best and most motivated students. In this multi-site cross-sectional questionnaire study, we examined the value of (different) selection procedures compared to a weighted lottery procedure, which includes direct admission based on top pre-university grade point averages (≥8 out of 10; top-pu-GPA). We also considered whether students had participated in selection, prior to being admitted through weighted lottery. Year-1 (pre-clinical) and Year-4 (clinical) students completed standard validated questionnaires measuring quality of motivation (Academic Self-regulation Questionnaire), strength of motivation (Strength of Motivation for Medical School-Revised) and engagement (Utrecht Work Engagement Scale-Student). Performance data comprised GPA and course credits in Year-1 and clerkship performance in Year-4. Regression analyses were performed. The response rate was 35% (387 Year-1 and 273 Year-4 students). Top-pu-GPA students outperformed selected students. Selected Year-1 students reported higher strength of motivation than top-pu-GPA students. Selected students did not outperform or show better quality of motivation and engagement than lottery-admitted students. Participation in selection was associated with higher engagement and better clerkship performance in Year-4. GPA, course credits and strength of motivation in Year-1 differed between students admitted through different selection procedures. Top-pu-GPA students perform best in the medical study. The few and small differences found raise questions about the added value of an extensive selection procedure compared to a weighted lottery procedure. Findings have to be interpreted with caution because of a low response rate and small group sizes.
ERIC Educational Resources Information Center
Brubaker, Harold A.
This study (1) describes student selection and retention procedures currently used by North Central Association colleges and universities which are accredited by the National Council for the Accreditation of Teacher Education, and (2) determines student selection and retention procedures which administrators of teacher education programs at the…
A Multi-Site Study on Medical School Selection, Performance, Motivation and Engagement
ERIC Educational Resources Information Center
Wouters, A.; Croiset, G.; Schripsema, N. R.; Cohen-Schotanus, J.; Spaai, G. W.; Hulsman, R. L.; Kusurkar, R. A.
2017-01-01
Medical schools seek ways to improve their admissions strategies, since the available methods prove to be suboptimal for selecting the best and most motivated students. In this multi-site cross-sectional questionnaire study, we examined the value of (different) selection procedures compared to a weighted lottery procedure, which includes direct…
ERIC Educational Resources Information Center
Adamson, Martin; And Others
Intended for use by curriculum committees or individuals charged with responsibility for the selection of provincially authorized learning resources, this document contains guidelines and procedures intended to serve as minimum standard requirements for the provincial evaluation and selection of learning resources. Learning resources are defined…
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
47 CFR 22.131 - Procedures for mutually exclusive applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... excluded by that grant, pursuant to § 1.945 of this chapter. (1) Selection methods. In selecting the... under § 1.945 of this chapter, either before or after employing selection procedures. (3) Type of filing... Commission may attempt to resolve the mutual exclusivity by facilitating a settlement between the applicants...
Additive Genetic Variability and the Bayesian Alphabet
Gianola, Daniel; de los Campos, Gustavo; Hill, William G.; Manfredi, Eduardo; Fernando, Rohan
2009-01-01
The use of all available molecular markers in statistical models for prediction of quantitative traits has led to what could be termed a genomic-assisted selection paradigm in animal and plant breeding. This article provides a critical review of some theoretical and statistical concepts in the context of genomic-assisted genetic evaluation of animals and crops. First, relationships between the (Bayesian) variance of marker effects in some regression models and additive genetic variance are examined under standard assumptions. Second, the connection between marker genotypes and resemblance between relatives is explored, and linkages between a marker-based model and the infinitesimal model are reviewed. Third, issues associated with the use of Bayesian models for marker-assisted selection, with a focus on the role of the priors, are examined from a theoretical angle. The sensitivity of a Bayesian specification that has been proposed (called “Bayes A”) with respect to priors is illustrated with a simulation. Methods that can solve potential shortcomings of some of these Bayesian regression procedures are discussed briefly. PMID:19620397
Mazerolle, M.J.
2006-01-01
In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools have surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) compared with hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
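The short sketch below illustrates the AIC workflow the paper advocates: fit a small candidate set of regression models, convert AIC differences into Akaike weights, and form a model-averaged estimate of a coefficient of interest. The habitat covariates, candidate models, and ordinary least-squares setting are invented for illustration and are not the herpetological data sets analysed in the paper.

```python
# AIC model selection, Akaike weights, and a model-averaged coefficient.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 80
area = rng.uniform(1, 10, n)
cover = rng.uniform(0, 1, n)
counts = 2.0 + 0.8 * area + rng.normal(scale=1.5, size=n)   # abundance driven by area only

designs = {
    "area":       sm.add_constant(np.column_stack([area])),
    "cover":      sm.add_constant(np.column_stack([cover])),
    "area+cover": sm.add_constant(np.column_stack([area, cover])),
}
fits = {name: sm.OLS(counts, X).fit() for name, X in designs.items()}
aic = np.array([fits[name].aic for name in designs])
delta = aic - aic.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()  # Akaike weights

# Model-averaged estimate of the 'area' coefficient over the models containing it
beta_area = np.array([fits["area"].params[1], fits["area+cover"].params[1]])
w_area = np.array([weights[0], weights[2]])
print(dict(zip(designs, weights)))
print("model-averaged area effect:", np.sum(w_area * beta_area) / w_area.sum())
```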
Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf
2016-07-01
Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long-chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, each rich in carbohydrates, proteins or lipids, with good predictive capability in all three cases. This model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
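The sketch below implements only the dynamic-programming building block of the approach described above: least-squares segmentation of a one-dimensional profile into K segments with piecewise-constant means. The EM/mixture step that assigns a biological status to each segment, and the model selection heuristic for choosing K and the number of clusters, are not reproduced; the profile is simulated.

```python
# Least-squares segmentation of a 1-D profile into K segments by dynamic programming.
import numpy as np

def dp_segment(y, K):
    n = len(y)
    csum = np.concatenate([[0.0], np.cumsum(y)])
    csum2 = np.concatenate([[0.0], np.cumsum(y**2)])
    def sse(i, j):
        # within-segment sum of squares for y[i:j+1]
        s, s2, m = csum[j+1] - csum[i], csum2[j+1] - csum2[i], j - i + 1
        return s2 - s * s / m
    D = np.full((K, n), np.inf)            # D[k, j]: best cost of k+1 segments on y[0:j+1]
    back = np.zeros((K, n), dtype=int)     # start index of the last segment
    for j in range(n):
        D[0, j] = sse(0, j)
    for k in range(1, K):
        for j in range(k, n):
            costs = [D[k-1, t-1] + sse(t, j) for t in range(k, j+1)]
            best = int(np.argmin(costs))
            D[k, j], back[k, j] = costs[best], k + best
    # Recover the K-1 breakpoints (segment start indices)
    bounds, j = [], n - 1
    for k in range(K - 1, 0, -1):
        t = back[k, j]
        bounds.append(t)
        j = t - 1
    return sorted(bounds)

rng = np.random.default_rng(9)
y = np.concatenate([rng.normal(0, 0.3, 60), rng.normal(1.5, 0.3, 40), rng.normal(-1, 0.3, 50)])
print("estimated breakpoints:", dp_segment(y, K=3))   # true breaks at 60 and 100
```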
Self-motion perception: assessment by computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Harm, D. L.; Sandoz, G. R.; Skinner, N. C.
1998-01-01
The goal of this research is more precise description of adaptation to sensory rearrangements, including microgravity, by development of improved procedures for assessing spatial orientation perception. Thirty-six subjects reported perceived self-motion following exposure to complex inertial-visual motion. Twelve subjects were assigned to each of 3 perceptual reporting procedures: (a) animation movie selection, (b) written report selection and (c) verbal report generation. The question addressed was: do reports produced by these procedures differ with respect to complexity and reliability? Following repeated (within-day and across-day) exposures to 4 different "motion profiles," subjects either (a) selected movies presented on a laptop computer, or (b) selected written descriptions from a booklet, or (c) generated self-motion verbal descriptions that corresponded most closely with their motion experience. One "complexity" and 2 reliability "scores" were calculated. Contrary to expectations, reliability and complexity scores were essentially equivalent for the animation movie selection and written report selection procedures. Verbal report generation subjects exhibited less complexity than did subjects in the other conditions and their reports were often ambiguous. The results suggest that, when selecting from carefully written descriptions and following appropriate training, people may be better able to describe their self-motion experience with words than is usually believed.
Application of GIS-based Procedure on Slopeland Use Classification and Identification
NASA Astrophysics Data System (ADS)
KU, L. C.; LI, M. C.
2016-12-01
In Taiwan, the "Slopeland Conservation and Utilization Act" regulates the management of the slopelands. It categorizes the slopeland into land suitable for agricultural or animal husbandry, land suitable for forestry and land for enhanced conservation, according to the environmental factors of average slope, effective soil depth, soil erosion and parental rock. Traditionally, investigations of environmental factors require cost-effective field works. It has been confronted with many practical issues such as non-evaluated cadastral parcels, evaluation results depending on expert's opinion, difficulties in field measurement and judgment, and time consuming. This study aimed to develop a GIS-based procedure involved in the acceleration of slopeland use classification and quality improvement. First, the environmental factors of slopelands were analyzed by GIS and SPSS software. The analysis involved with the digital elevation model (DEM), soil depth map, land use map and satellite images. Second, 5% of the analyzed slopelands were selected to perform the site investigations and correct the results of classification. Finally, a 2nd examination was involved by randomly selected 2% of the analyzed slopelands to perform the accuracy evaluation. It was showed the developed procedure is effective in slopeland use classification and identification. Keywords: Slopeland Use Classification, GIS, Management
Infrared radiative energy transfer in gaseous systems
NASA Technical Reports Server (NTRS)
Tiwari, Surendra N.
1991-01-01
Analyses and numerical procedures are presented to investigate the radiative interactions in various energy transfer processes in gaseous systems. Both gray and non-gray radiative formulations for absorption and emission by molecular gases are presented. The gray gas formulations are based on the Planck mean absorption coefficient and the non-gray formulations are based on the wide band model correlations for molecular absorption. Various relations for the radiative flux and divergence of radiative flux are developed. These are useful for different flow conditions and physical problems. Specific plans for obtaining extensive results for different cases are presented. The procedure developed was applied to several realistic problems. Results of selected studies are presented.
Characterization of PMR polyimide resin and prepreg
NASA Technical Reports Server (NTRS)
Lindenmeyer, P. H.; Sheppard, C. H.
1984-01-01
Procedures for the chemical characterization of PMR-15 resin solutions and graphite-reinforced prepregs were developed, and a chemical data base was established. In addition, a basic understanding of PMR-15 resin chemistry was gained; this was translated into effective processing procedures for the production of high quality graphite composites. During the program the PMR monomers and selected model compounds representative of postulated PMR-15 solution chemistry were acquired and characterized. Based on these data, a baseline PMR-15 resin was formulated and evaluated for processing characteristics and composite properties. Commercially available PMR-15 resins were then obtained and chemically characterized. Composite panels were fabricated and evaluated.
Viscous remanent magnetization model for the Broken Ridge satellite magnetic anomaly
NASA Technical Reports Server (NTRS)
Johnson, B. D.
1985-01-01
An equivalent source model solution of the satellite magnetic field over Australia obtained by Mayhew et al. (1980) showed that the satellite anomalies could be related to geological features in Australia. When the processing and selection of the Magsat data over the Australian region had progressed to the point where interpretation procedures could be initiated, it was decided to start by attempting to model the Broken Ridge satellite anomaly, which represents one of the very few relatively isolated anomalies in the Magsat maps, with an unambiguous source region. Attention is given to details concerning the Broken Ridge satellite magnetic anomaly, the modeling method used, the Broken Ridge models, modeling results, and characteristics of magnetization.
Basis Selection for Wavelet Regression
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)
1998-01-01
A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature. The results of the method are contrasted with other published methods.
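A hedged sketch of the basis-and-threshold selection idea is given below, using PyWavelets and a simple even/odd two-fold cross-validation in place of the report's exact scheme (the option of encoding prior smoothness knowledge is not reproduced). The candidate wavelets, threshold grid, and test signal are assumptions.

```python
# Choose a wavelet basis and soft threshold by even/odd cross-validation.
import numpy as np
import pywt

def denoise(signal, wavelet, threshold):
    coeffs = pywt.wavedec(signal, wavelet, mode="periodization")
    # keep the approximation coefficients, soft-threshold the detail coefficients
    coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(signal)]

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)
y = np.sin(4 * np.pi * t) + (t > 0.5) + rng.normal(scale=0.3, size=t.size)  # noisy test signal

best = None
for wavelet in ["db2", "db4", "sym8"]:
    for threshold in np.linspace(0.05, 1.0, 20):
        even, odd = y[0::2], y[1::2]
        # Denoise the even-indexed half and score it against the odd-indexed half,
        # which acts as a noisy stand-in for the true function at nearby points.
        err = np.mean((denoise(even, wavelet, threshold) - odd) ** 2)
        if best is None or err < best[0]:
            best = (err, wavelet, threshold)

_, wavelet, threshold = best
print("selected basis:", wavelet, "threshold:", round(threshold, 2))
fit = denoise(y, wavelet, threshold)                 # final estimate on the full signal
```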
Lawler, Frank H; Wilson, Frank R; Smith, G Keith; Mitchell, Lynn V
2017-12-01
Healthcare reimbursement, which has traditionally been based on the quantity of services delivered, is currently moving toward value-based reimbursement-a system that addresses the quantity, quality, and cost of services. One such arrangement has been the evolution of bundled payments for a specific procedure or for an episode of care, paid prospectively or through post-hoc reconciliation. To evaluate the impact of instituting bundled payments that incorporate facility charges, physician fees, and all ancillary charges by the State of Oklahoma HealthChoice public employee insurance plan. From January 1 through December 31, 2016, HealthChoice, a large, government-sponsored Oklahoma health plan, implemented a voluntary, prospective, bundled payment system with network facilities, called Select. The Select program allows members at the time of certification of the services to opt to use participating facilities for specified services at a bundled rate, with deductible and coinsurance covered by the health plan. That is, the program allows any plan member to choose either a participating Select facility with no out-of-pocket costs or standard benefits at a participating network facility. During 2016, more than 7900 procedures were performed for 5907 patients who chose the Select arrangement (also designated as the intervention group). The most common outpatient Select procedures were for cardiology, colonoscopy, and magnetic resonance imaging scans. The most common inpatient procedures for Select-covered patients were in 6 diagnosis-related groups covering spinal fusions, joint replacement surgeries, and percutaneous coronary artery stenting. The allowable costs were similar for bundled procedures at ambulatory surgery centers and at outpatient hospital facilities; the allowable costs for patients not in the Select program (mean, $813) were lower at ambulatory surgery centers than at outpatient hospital departments (mean, $3086) because of differences in case mix. Patients in the Select system who had outpatient procedures had significantly fewer subsequent claims than those who were not in Select for hospitalization (1.7% vs 2.5%, respectively) and emergency department visits (4.4% vs 11.5%, respectively) in the 30 days postprocedure. Quality measures (eg, wound infection and reoperation) were similar for patients who were and were not in the Select group and had procedures. Surgical complication (ie, return to surgery) rates were higher for the Select group. The Select program demonstrated promising results during its first year of operation, suggesting that prospective bundled payment arrangements can be implemented successfully. Further research on reimbursement mechanisms, that is, how to pay physicians and facilities, and quality of outcomes is needed, especially with respect to which procedures are most suitable for this payment arrangement.
Lawler, Frank H.; Wilson, Frank R.; Smith, G. Keith; Mitchell, Lynn V.
2017-01-01
Background Healthcare reimbursement, which has traditionally been based on the quantity of services delivered, is currently moving toward value-based reimbursement—a system that addresses the quantity, quality, and cost of services. One such arrangement has been the evolution of bundled payments for a specific procedure or for an episode of care, paid prospectively or through post-hoc reconciliation. Objective To evaluate the impact of instituting bundled payments that incorporate facility charges, physician fees, and all ancillary charges by the State of Oklahoma HealthChoice public employee insurance plan. Method From January 1 through December 31, 2016, HealthChoice, a large, government-sponsored Oklahoma health plan, implemented a voluntary, prospective, bundled payment system with network facilities, called Select. The Select program allows members at the time of certification of the services to opt to use participating facilities for specified services at a bundled rate, with deductible and coinsurance covered by the health plan. That is, the program allows any plan member to choose either a participating Select facility with no out-of-pocket costs or standard benefits at a participating network facility. Results During 2016, more than 7900 procedures were performed for 5907 patients who chose the Select arrangement (also designated as the intervention group). The most common outpatient Select procedures were for cardiology, colonoscopy, and magnetic resonance imaging scans. The most common inpatient procedures for Select-covered patients were in 6 diagnosis-related groups covering spinal fusions, joint replacement surgeries, and percutaneous coronary artery stenting. The allowable costs were similar for bundled procedures at ambulatory surgery centers and at outpatient hospital facilities; the allowable costs for patients not in the Select program (mean, $813) were lower at ambulatory surgery centers than at outpatient hospital departments (mean, $3086) because of differences in case mix. Patients in the Select system who had outpatient procedures had significantly fewer subsequent claims than those who were not in Select for hospitalization (1.7% vs 2.5%, respectively) and emergency department visits (4.4% vs 11.5%, respectively) in the 30 days postprocedure. Quality measures (eg, wound infection and reoperation) were similar for patients who were and were not in the Select group and had procedures. Surgical complication (ie, return to surgery) rates were higher for the Select group. Conclusion The Select program demonstrated promising results during its first year of operation, suggesting that prospective bundled payment arrangements can be implemented successfully. Further research on reimbursement mechanisms, that is, how to pay physicians and facilities, and quality of outcomes is needed, especially with respect to which procedures are most suitable for this payment arrangement. PMID:29403570
Moore, M.K.; Cicnjak-Chubbs, L.; Gates, R.J.
1994-01-01
A selective enrichment procedure, using two new selective media, was developed to isolate Pasteurella multocida from wild birds and environmental samples. These media were developed by testing 15 selective agents with six isolates of P. multocida from wild avian origin and seven other bacteria representing genera frequently found in environmental and avian samples. The resulting media—Pasteurella multocida selective enrichment broth and Pasteurella multocida selective agar—consisted of a blood agar medium at pH 10 containing gentamicin, potassium tellurite, and amphotericin B. Media were tested to determine: 1) selectivity when attempting isolation from pond water and avian carcasses, 2) sensitivity for detection of low numbers of P. multocida from pure and mixed cultures, 3) host range specificity of the media, and 4) performance compared with standard blood agar. With the new selective enrichment procedure, P. multocida was isolated from inoculated (60 organisms/ml) pond water 84% of the time, whereas when standard blood agar was used, the recovery rate was 0%.
Code of Federal Regulations, 2010 CFR
2010-10-01
... lines that are likely to have high or low theft rates. 542.1 Section 542.1 Transportation Other... OF TRANSPORTATION PROCEDURES FOR SELECTING LIGHT DUTY TRUCK LINES TO BE COVERED BY THE THEFT... or low theft rates. (a) Scope. This section sets forth the procedures for motor vehicle manufacturers...
ERIC Educational Resources Information Center
Tarbox, Jonathan; Schiff, Averil; Najdowski, Adel C.
2010-01-01
Food selectivity is characterized by the consumption of an inadequate variety of foods. The effectiveness of behavioral treatment procedures, particularly nonremoval of the spoon, is well validated by research. The role of parents in the treatment of feeding disorders and the feasibility of behavioral procedures for parent implementation in the…
Antenna Linear-Quadratic-Gaussian (LQG) Controllers: Properties, Limits of Performance, and Tuning
NASA Technical Reports Server (NTRS)
Gawronski, Wodek K.
2004-01-01
The LQG controllers significantly improve antenna tracking precision, but their tuning is a trial-and-error process. A control engineer has two tools to tune an LQG controller: the choice of coordinate system of the controller, and the selection of weights of the LQG performance index. The paper selects the coordinates of the open-loop model that simplify the shaping of the closed-loop performance, and analyzes the impact of the weights on the antenna closed-loop bandwidth, disturbance rejection properties, and antenna acceleration. Finally, it presents the LQG controller tuning procedure that rationally shapes the closed-loop performance.
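The sketch below illustrates the two tuning handles discussed in the paper in the simplest possible setting: the state-feedback gain follows from one algebraic Riccati equation (weights Q and R) and the estimator gain from its dual (noise covariances W and V), and changing the weights moves the closed-loop poles. The double-integrator plant and every numerical weight are invented stand-ins, not the antenna model or the tuning procedure of the paper.

```python
# LQG gains from the two algebraic Riccati equations for a toy plant.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # toy double-integrator "antenna axis"
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.diag([100.0, 1.0])                    # state weight: heavier Q tightens tracking
R = np.array([[1.0]])                        # control weight: heavier R limits acceleration
W = np.diag([1e-3, 1e-3])                    # process noise covariance
V = np.array([[1e-2]])                       # measurement noise covariance

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)              # LQR state-feedback gain
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)               # Kalman estimator gain

# Closed-loop regulator and estimator poles move with the choice of weights:
print("controller poles:", np.linalg.eigvals(A - B @ K))
print("estimator poles:", np.linalg.eigvals(A - L @ C))
```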
Kasten, Chelsea R.; Blasingame, Shelby N.; Boehm, Stephen L.
2014-01-01
The GABAB receptor agonist baclofen has been studied extensively in preclinical models of alcohol use disorders, yet results on its efficacy have been uncertain. Racemic baclofen, which is used clinically, can be broken down into separate enantiomers of the drug. Baclofen has been shown to produce enantioselective effects in behavioral assays including those modeling reflexive and sexual behavior. The current studies sought to characterize the enantioselective effects of baclofen in two separate models of ethanol consumption. The first was a Drinking-in-the-Dark procedure that provides “binge-like” ethanol access to mice by restricting access to a two hour period, three hours into the dark cycle. The second was a two-bottle choice procedure that utilized selectively bred High Alcohol Preferring 1 (HAP1) mice to model chronic ethanol access. HAP1 mice are selectively bred to consume pharmacologically relevant amounts of ethanol in a 24-hour two-bottle choice paradigm. The results showed that baclofen yields enantioselective effects on ethanol intake in both models, and that these effects are bidirectional. Total ethanol intake was decreased by R(+)- baclofen, while total intake was increased by S(-)-baclofen in the binge-like and chronic drinking models. Whereas overall binge-like saccharin intake was significantly reduced by R(+)- baclofen, chronic intake was not significantly altered. S(-)- baclofen did not significantly alter saccharin intake. Neither enantiomer significantly affected locomotion during binge-like reinforcer consumption. Collectively, these results demonstrate that baclofen produces enantioselective effects on ethanol consumption. More importantly, the modulation of consumption is bidirectional. The opposing enantioselective effects may explain some of the variance seen in published baclofen literature. PMID:25557834
Kasten, Chelsea R; Blasingame, Shelby N; Boehm, Stephen L
2015-02-01
The GABAB receptor agonist baclofen has been studied extensively in preclinical models of alcohol-use disorders, yet results on its efficacy have been uncertain. Racemic baclofen, which is used clinically, can be broken down into separate enantiomers of the drug. Baclofen has been shown to produce enantioselective effects in behavioral assays, including those modeling reflexive and sexual behavior. The current studies sought to characterize the enantioselective effects of baclofen in two separate models of ethanol consumption. The first was a Drinking-in-the-Dark procedure that provides "binge-like" ethanol access to mice by restricting access to a 2-h period, 3 h into the dark cycle. The second was a two-bottle choice procedure that utilized selectively bred High Alcohol Preferring 1 (HAP1) mice to model chronic ethanol access. HAP1 mice are selectively bred to consume pharmacologically relevant amounts of ethanol in a 24-h two-bottle choice paradigm. The results showed that baclofen yields enantioselective effects on ethanol intake in both models, and that these effects are bidirectional. Total ethanol intake was decreased by R(+)-baclofen, while total intake was increased by S(-)-baclofen in the binge-like and chronic drinking models. Whereas overall binge-like saccharin intake was significantly reduced by R(+)-baclofen, chronic intake was not significantly altered. S(-)-baclofen did not significantly alter saccharin intake. Neither enantiomer significantly affected locomotion during binge-like reinforcer consumption. Collectively, these results demonstrate that baclofen produces enantioselective effects on ethanol consumption. More importantly, the modulation of consumption is bidirectional. The opposing enantioselective effects may explain some of the variance seen in published baclofen literature. Copyright © 2015 Elsevier Inc. All rights reserved.
Human salmonellosis: estimation of dose-illness from outbreak data.
Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen
2008-04-01
The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occurring in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data of 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. A first procedure accounts for stochastic variability whereas a second procedure accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
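A minimal sketch of the kind of dose-illness fit described above, assuming synthetic grouped outbreak data and a logistic (GLM) link with fractional-polynomial dose terms; the random effects for serovar and food matrix and the bootstrap procedures of the study are omitted.

```python
# Sketch: logistic dose-illness curve with fractional-polynomial dose terms,
# fitted to synthetic grouped outbreak data (ill cases out of exposed).
# The random effects (serovar, food matrix) used in the study are omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
dose = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8])   # organisms ingested
exposed = np.array([50, 60, 40, 80, 70, 90, 60, 50])
true_p = 1 / (1 + np.exp(-(-6.0 + 0.8 * np.log10(dose))))
ill = rng.binomial(exposed, true_p)

# Fractional-polynomial design with repeated power 0: log(dose), log(dose)^2
X = sm.add_constant(np.column_stack([np.log(dose), np.log(dose) ** 2]))
fit = sm.GLM(np.column_stack([ill, exposed - ill]), X,
             family=sm.families.Binomial()).fit()
print(fit.summary())

x_new = np.array([[1.0, np.log(1e3), np.log(1e3) ** 2]])
print("Predicted P(illness) at a dose of 1e3 organisms:", fit.predict(x_new))
```

The two-column response (cases ill, cases not ill) lets the grouped outbreak counts be fitted directly; the serovar and food-matrix random effects would require a mixed-model routine instead.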
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation to the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are sequentially included, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
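A minimal sketch of the periodic-parameter idea, assuming synthetic monthly data and fitting a periodic AR(1) (a PARMA(1,0) special case) season by season; the full Gaussian PARMA likelihood and the Fourier-expansion selection criterion are not reproduced.

```python
# Sketch: periodic AR(1) fit, season by season -- a special case of PARMA(1,0).
# Synthetic "monthly streamflow" data; the full PARMA likelihood and the
# Fourier-parsimonious parameterization of the paper are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_years, n_seasons = 40, 12
true_phi = 0.3 + 0.4 * np.sin(2 * np.pi * np.arange(n_seasons) / n_seasons)

x = np.zeros(n_years * n_seasons)
for t in range(1, x.size):
    m = t % n_seasons
    x[t] = true_phi[m] * x[t - 1] + rng.normal()

phi_hat = np.zeros(n_seasons)
for m in range(n_seasons):
    t = np.arange(1, x.size)
    idx = t[t % n_seasons == m]              # time indices falling in season m
    a, b = x[idx - 1], x[idx]
    phi_hat[m] = (a @ b) / (a @ a)           # per-season least-squares AR(1)

print("season, true phi, estimated phi")
for m in range(n_seasons):
    print(f"{m:2d}  {true_phi[m]: .3f}  {phi_hat[m]: .3f}")
```

In practice the per-season coefficients would be re-expressed with a few Fourier harmonics, and the selection criterion in the paper decides how many harmonics to keep.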
The Big-Five factor structure as an integrative framework: an analysis of Clarke's AVA model.
Goldberg, L R; Sweeney, D; Merenda, P F; Hughes, J E
1996-06-01
Using a large (N = 3,629) sample of participants selected to be representative of U.S. working adults in the year 2000, we provide links between the constructs in 2 personality models that have been derived from quite different rationales. We demonstrate the use of a novel procedure for providing orthogonal Big-Five factor scores and use those scores to analyze the scales of the Activity Vector Analysis (AVA). We discuss the implications of our many findings both for the science of personality assessment and for future research using the AVA model.
Effective degrees of freedom: a flawed metaphor
Janson, Lucas; Fithian, William; Hastie, Trevor J.
2015-01-01
Summary To most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114
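A small simulation, under illustrative assumptions, of the covariance definition of degrees of freedom, df = (1/σ²) Σᵢ Cov(ŷᵢ, yᵢ), applied to best-subset selection of fixed size k; selection effects can push the estimate well above k, which is the kind of mismatch between nominal model size and degrees of freedom that the authors highlight.

```python
# Sketch: Monte Carlo estimate of effective degrees of freedom
#   df = (1/sigma^2) * sum_i Cov(yhat_i, y_i)
# for best-subset-of-size-k OLS, which can exceed k because of selection.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p, k, sigma = 50, 10, 2, 1.0
X = rng.normal(size=(n, p))
mu = np.zeros(n)                      # null signal: selection effects dominate

def best_subset_fit(y, X, k):
    best_rss, best_yhat = np.inf, None
    for S in combinations(range(X.shape[1]), k):
        Xs = X[:, S]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        yhat = Xs @ beta
        rss = np.sum((y - yhat) ** 2)
        if rss < best_rss:
            best_rss, best_yhat = rss, yhat
    return best_yhat

reps = 500
Y, Yhat = np.empty((reps, n)), np.empty((reps, n))
for r in range(reps):
    y = mu + sigma * rng.normal(size=n)
    Y[r], Yhat[r] = y, best_subset_fit(y, X, k)

cov_terms = np.mean((Y - Y.mean(0)) * (Yhat - Yhat.mean(0)), axis=0)
print("estimated df:", cov_terms.sum() / sigma**2, " (subset size k =", k, ")")
```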
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
Designing Recycled Hot Mix Asphalt Mixtures Using Superpave Technology
DOT National Transportation Integrated Search
1997-01-01
Mix design procedures for recycled asphalt pavements require the selection of virgin asphalt binder or recycling agent. This research project was undertaken to develop a procedure for selecting the performance grade (PG) of virgin asphalt binde...
ERIC Educational Resources Information Center
Nielsen, Earl T.
This monograph is designed to assist administrative school personnel in selecting, training, and retaining the best qualified instructional aides available. It covers five areas: (1) employment procedures--outlining important points under recruitment, applications, examinations, interviews, and selection; (2) payroll procedures--outlining how to…
NASA Astrophysics Data System (ADS)
Çakır, Süleyman
2017-10-01
In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second phase, an interval inverse DEA model is executed for resource allocation in the short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
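For readers unfamiliar with the underlying machinery, the sketch below computes crisp (non-fuzzy) input-oriented CCR DEA efficiency scores with one linear program per decision-making unit; the imprecise Shannon's entropy variable selection and the interval inverse DEA model of the paper are not reproduced, and the data are invented.

```python
# Sketch: crisp input-oriented CCR DEA efficiency scores (multiplier form),
# solved as a linear program per decision-making unit (DMU). The fuzzy /
# interval inverse DEA machinery of the paper is not reproduced here.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4, 3], [7, 3], [8, 1], [4, 2], [2, 4]], float)   # inputs (DMU x m)
Y = np.array([[1], [1], [1], [1], [1]], float)                  # outputs (DMU x s)
n, m = X.shape
s = Y.shape[1]

for j0 in range(n):
    # decision variables z = [u (output weights, s), v (input weights, m)]
    c = np.concatenate([-Y[j0], np.zeros(m)])          # maximize u'y0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # normalization v'x0 = 1
    b_eq = [1.0]
    A_ub = np.hstack([Y, -X])                          # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    print(f"DMU {j0}: efficiency = {-res.fun:.3f}")
```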
Gold, Matthew G.; Fowler, Douglas M.; Means, Christopher K.; Pawson, Catherine T.; Stephany, Jason J.; Langeberg, Lorene K.; Fields, Stanley; Scott, John D.
2013-01-01
PKA is retained within distinct subcellular environments by the association of its regulatory type II (RII) subunits with A-kinase anchoring proteins (AKAPs). Conventional reagents that universally disrupt PKA anchoring are patterned after a conserved AKAP motif. We introduce a phage selection procedure that exploits high-resolution structural information to engineer RII mutants that are selective for a particular AKAP. Selective RII (RSelect) sequences were obtained for eight AKAPs following competitive selection screening. Biochemical and cell-based experiments validated the efficacy of RSelect proteins for AKAP2 and AKAP18. These engineered proteins represent a new class of reagents that can be used to dissect the contributions of different AKAP-targeted pools of PKA. Molecular modeling and high-throughput sequencing analyses revealed the molecular basis of AKAP-selective interactions and shed new light on native RII-AKAP interactions. We propose that this structure-directed evolution strategy might be generally applicable for the investigation of other protein interaction surfaces. PMID:23625929
Tian, Xin; Xin, Mingyuan; Luo, Jian; Liu, Mingyao; Jiang, Zhenran
2017-02-01
The selection of relevant genes for breast cancer metastasis is critical for the treatment and prognosis of cancer patients. Although much effort has been devoted to the gene selection procedures by use of different statistical analysis methods or computational techniques, the interpretation of the variables in the resulting survival models has been limited so far. This article proposes a new Random Forest (RF)-based algorithm to identify important variables highly related with breast cancer metastasis, which is based on the important scores of two variable selection algorithms, including the mean decrease Gini (MDG) criteria of Random Forest and the GeneRank algorithm with protein-protein interaction (PPI) information. The new gene selection algorithm can be called PPIRF. The improved prediction accuracy fully illustrated the reliability and high interpretability of gene list selected by the PPIRF approach.
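A minimal sketch of the two ingredients named above, assuming synthetic expression data and a toy PPI network: Random Forest mean-decrease-Gini importances and a GeneRank-style score obtained by power iteration, merged here by a simple rank average (the actual combination rule used by PPIRF is an assumption).

```python
# Sketch: combine Random Forest mean-decrease-Gini importance with a
# GeneRank-style score from a PPI network (power iteration). The way the two
# scores are merged in PPIRF is assumed here to be a simple rank average.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_samples, n_genes = 120, 30
X = rng.normal(size=(n_samples, n_genes))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=n_samples) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
mdg = rf.feature_importances_                      # mean decrease Gini

A = (rng.random((n_genes, n_genes)) < 0.1).astype(float)   # toy PPI adjacency
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
deg = np.maximum(A.sum(1), 1.0)
expr = np.abs(np.corrcoef(X.T, y)[-1, :-1])        # per-gene association score
d = 0.85
r = expr.copy()
for _ in range(100):                               # GeneRank-style iteration
    r = (1 - d) * expr + d * (A / deg) @ r

rank = lambda v: np.argsort(np.argsort(-v))        # 0 = most important
combined = (rank(mdg) + rank(r)) / 2.0
print("top genes by combined rank:", np.argsort(combined)[:5])
```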
NASA Astrophysics Data System (ADS)
Nardi, F.; Grimaldi, S.; Petroselli, A.
2012-12-01
Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling framework optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model, producing flow time series that are routed along the channel using a two-dimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during extreme hydrologic events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model eliminates the need to estimate synthetic design hyetographs and hydrographs, which constitute the main source of subjective analysis and uncertainty in standard flood-mapping methods. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.
Assessing the fit of site-occupancy models
MacKenzie, D.I.; Bailey, L.L.
2004-01-01
Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied, when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and wish to address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, hence we estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers toward making appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
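A minimal sketch of the Pearson chi-square / parametric-bootstrap assessment, assuming a single-season occupancy model with constant occupancy and detection probabilities and synthetic detection histories; covariates and the overdispersion adjustment used in the paper are omitted.

```python
# Sketch: Pearson chi-square goodness-of-fit with a parametric bootstrap for a
# single-season occupancy model (constant psi, p), on synthetic detection
# histories. Covariates and the overdispersion adjustment are omitted.
import numpy as np
from itertools import product
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(4)
n_sites, n_visits = 200, 3

def simulate(psi, p):
    z = rng.random(n_sites) < psi                      # true occupancy states
    return ((rng.random((n_sites, n_visits)) < p) * z[:, None]).astype(int)

def hist_probs(psi, p):
    """Probability of each of the 2^J possible detection histories."""
    probs = {}
    for h in product([0, 1], repeat=n_visits):
        d = sum(h)
        pr = psi * p**d * (1 - p)**(n_visits - d)
        if d == 0:
            pr += 1 - psi                              # never-detected sites
        probs[h] = pr
    return probs

def fit(Y):
    def nll(theta):
        psi, p = expit(theta)
        probs = hist_probs(psi, p)
        return -sum(np.log(probs[tuple(row)]) for row in Y)
    return expit(minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead").x)

def pearson(Y, psi, p):
    probs = hist_probs(psi, p)
    obs = {h: 0 for h in probs}
    for row in Y:
        obs[tuple(row)] += 1
    return sum((obs[h] - n_sites * pr)**2 / (n_sites * pr)
               for h, pr in probs.items())

Y = simulate(psi=0.6, p=0.4)
psi_hat, p_hat = fit(Y)
t_obs = pearson(Y, psi_hat, p_hat)
t_boot = [pearson(Yb, *fit(Yb))
          for Yb in (simulate(psi_hat, p_hat) for _ in range(200))]
print(f"psi_hat={psi_hat:.2f} p_hat={p_hat:.2f}  "
      f"P(T_boot >= T_obs) = {np.mean(np.array(t_boot) >= t_obs):.2f}")
```

A small bootstrap p-value would indicate lack of fit; in that case the ratio of the observed statistic to its bootstrap mean can serve as an overdispersion factor, as done in the paper.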
Thompson, Laura; Exline, Matthew; Leung, Cynthia G; Way, David P; Clinchot, Daniel; Bahner, David P; Khandelwal, Sorabh
2016-01-01
Background Procedural skills training is a critical component of medical education, but is often lacking in standard clinical curricula. We describe a unique immersive procedural skills curriculum for medical students, designed and taught primarily by emergency medicine faculty at The Ohio State University College of Medicine. Objectives The primary educational objective of this program was to formally introduce medical students to clinical procedures thought to be important for success in residency. The immersion strategy (teaching numerous procedures over a 7-day period) was intended to complement the student's education on third-year core clinical clerkships. Program design The course introduced 27 skills over 7 days. Teaching and learning methods included lecture, prereading, videos, task trainers, peer teaching, and procedures practice on cadavers. In year 4 of the program, a peer-team teaching model was adopted. We analyzed program evaluation data over time. Impact Students valued the selection of procedures covered by the course and felt that it helped prepare them for residency (97%). The highest rated activities were the cadaver lab and the advanced cardiac life support (97 and 93% positive endorsement, respectively). Lectures were less well received (73% positive endorsement), but improved over time. The transition to peer-team teaching resulted in improved student ratings of course activities (p<0.001). Conclusion A dedicated procedural skills curriculum successfully supplemented the training medical students received in the clinical setting. Students appreciated hands-on activities and practice. The peer-teaching model improved course evaluations by students, which implies that this was an effective teaching method for adult learners. This course was recently expanded and restructured to place the learning closer to the clinical settings in which skills are applied.
Strategy Developed for Selecting Optimal Sensors for Monitoring Engine Health
NASA Technical Reports Server (NTRS)
2004-01-01
Sensor indications during rocket engine operation are the primary means of assessing engine performance and health. Effective selection and location of sensors in the operating engine environment enables accurate real-time condition monitoring and rapid engine controller response to mitigate critical fault conditions. These capabilities are crucial to ensure crew safety and mission success. Effective sensor selection also facilitates postflight condition assessment, which contributes to efficient engine maintenance and reduced operating costs. Under the Next Generation Launch Technology program, the NASA Glenn Research Center, in partnership with Rocketdyne Propulsion and Power, has developed a model-based procedure for systematically selecting an optimal sensor suite for assessing rocket engine system health. This optimization process is termed the systematic sensor selection strategy. Engine health management (EHM) systems generally employ multiple diagnostic procedures including data validation, anomaly detection, fault-isolation, and information fusion. The effectiveness of each diagnostic component is affected by the quality, availability, and compatibility of sensor data. Therefore systematic sensor selection is an enabling technology for EHM. Information in three categories is required by the systematic sensor selection strategy. The first category consists of targeted engine fault information; including the description and estimated risk-reduction factor for each identified fault. Risk-reduction factors are used to define and rank the potential merit of timely fault diagnoses. The second category is composed of candidate sensor information; including type, location, and estimated variance in normal operation. The final category includes the definition of fault scenarios characteristic of each targeted engine fault. These scenarios are defined in terms of engine model hardware parameters. Values of these parameters define engine simulations that generate expected sensor values for targeted fault scenarios. Taken together, this information provides an efficient condensation of the engineering experience and engine flow physics needed for sensor selection. The systematic sensor selection strategy is composed of three primary algorithms. The core of the selection process is a genetic algorithm that iteratively improves a defined quality measure of selected sensor suites. A merit algorithm is employed to compute the quality measure for each test sensor suite presented by the selection process. The quality measure is based on the fidelity of fault detection and the level of fault source discrimination provided by the test sensor suite. An inverse engine model, whose function is to derive hardware performance parameters from sensor data, is an integral part of the merit algorithm. The final component is a statistical evaluation algorithm that characterizes the impact of interference effects, such as control-induced sensor variation and sensor noise, on the probability of fault detection and isolation for optimal and near-optimal sensor suites.
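A toy sketch of the genetic-algorithm core of such a strategy, assuming an invented fault-signature matrix and a simple merit score (separation of fault signatures, penalized by suite size); the real merit algorithm, inverse engine model, and statistical evaluation described above are not reproduced.

```python
# Sketch: a toy genetic algorithm that picks a sensor subset maximizing a
# simple merit score (separation of fault signatures in the selected sensors,
# penalized by sensor count). The real strategy's merit algorithm, inverse
# engine model and statistical evaluation are not reproduced here.
import numpy as np

rng = np.random.default_rng(5)
n_sensors, n_faults = 20, 6
S = rng.normal(size=(n_faults, n_sensors))        # invented fault signatures

def merit(mask):
    if mask.sum() == 0:
        return -np.inf
    sub = S[:, mask.astype(bool)]
    # smallest pairwise distance between fault signatures = isolability proxy
    dists = [np.linalg.norm(sub[i] - sub[j])
             for i in range(n_faults) for j in range(i + 1, n_faults)]
    return min(dists) - 0.1 * mask.sum()          # penalize large suites

pop = (rng.random((40, n_sensors)) < 0.5).astype(int)
for gen in range(60):
    scores = np.array([merit(ind) for ind in pop])
    # tournament selection
    parents = pop[[max(rng.choice(len(pop), 2), key=lambda i: scores[i])
                   for _ in range(len(pop))]]
    # one-point crossover followed by bit-flip mutation
    children = parents.copy()
    for i in range(0, len(children) - 1, 2):
        cut = rng.integers(1, n_sensors)
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    flip = rng.random(children.shape) < 0.02
    children[flip] = 1 - children[flip]
    pop = children

best = pop[np.argmax([merit(ind) for ind in pop])]
print("selected sensors:", np.flatnonzero(best), " merit:", merit(best))
```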
A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.
NASA Astrophysics Data System (ADS)
di Luccio, F.; Pino, A.; Thio, H.
2002-12-01
We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is mostly affected by moderate earthquakes, which can produce significant damage. The selected area comprises several tectonic environments, which increases the difficulty of waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard as well, in particular because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a crustal thickness of 30 km and low upper-mantle velocities. The inversion procedure applied to determine the source mechanism was successful, including the discrimination of depth, for the entire range of selected paths. We conclude that, using a well-calibrated seismic structure and high-quality broadband data, it is possible to determine the seismic source mechanism even with a single station.
The photon identification loophole in EPRB experiments: computer models with single-wing selection
NASA Astrophysics Data System (ADS)
De Raedt, Hans; Michielsen, Kristel; Hess, Karl
2017-11-01
Recent Einstein-Podolsky-Rosen-Bohm experiments [M. Giustina et al. Phys. Rev. Lett. 115, 250401 (2015); L. K. Shalm et al. Phys. Rev. Lett. 115, 250402 (2015)] that claim to be loophole free are scrutinized. The combination of a digital computer and discrete-event simulation is used to construct a minimal but faithful model of the most perfected realization of these laboratory experiments. In contrast to prior simulations, all photon selections are strictly made, as they are in the actual experiments, at the local station and no other "post-selection" is involved. The simulation results demonstrate that a manifestly non-quantum model that identifies photons in the same local manner as in these experiments can produce correlations that are in excellent agreement with those of the quantum theoretical description of the corresponding thought experiment, in conflict with Bell's theorem which states that this is impossible. The failure of Bell's theorem is possible because of our recognition of the photon identification loophole. Such identification measurement-procedures are necessarily included in all actual experiments but are not included in the theory of Bell and his followers.
Estimation of effective hydrologic properties of soils from observations of vegetation density
NASA Technical Reports Server (NTRS)
Tellers, T. E.; Eagleson, P. S.
1980-01-01
A one-dimensional model of the annual water balance is reviewed. Improvements are made in the method of calculating the bare soil component of evaporation, and in the way surface retention is handled. A natural selection hypothesis, which specifies the equilibrium vegetation density for a given, water-limited, climate-soil system, is verified through comparisons with observed data. Comparison of CDFs of annual basin yield derived using these soil properties with observed CDFs provides verification of the soil-selection procedure. This method of parameterization of the land surface is useful with global circulation models, enabling them to account for both the nonlinearity in the relationship between soil moisture flux and soil moisture concentration, and the variability of soil properties from place to place over the Earth's surface.
Discovery and first models of the quadruply lensed quasar SDSS J1433+6007
NASA Astrophysics Data System (ADS)
Agnello, Adriano; Grillo, Claudio; Jones, Tucker; Treu, Tommaso; Bonamigo, Mario; Suyu, Sherry H.
2018-03-01
We report the discovery of the quadruply lensed quasar SDSS J1433+6007 (RA = 14:33:22.8, Dec. = +60:07:13.44), mined in the SDSS DR12 photometric catalogues using a novel outlier-selection technique, without prior spectroscopic or ultraviolet excess information. Discovery data obtained at the Nordic Optical Telescope (La Palma) show nearly identical quasar spectra at zs = 2.737 ± 0.003 and four quasar images in a fold configuration, one of which sits on a blue arc, with maximum separation 3.6 arcsec. The deflector redshift is zl = 0.407 ± 0.002, from Keck-ESI spectra. We describe the selection procedure, discovery and follow-up, image positions and BVRi magnitudes, and first results and forecasts from lens model fit to the relative image positions.
Aerosol-type retrieval and uncertainty quantification from OMI data
NASA Astrophysics Data System (ADS)
Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna
2017-11-01
We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to retrieve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel-set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by the posterior probability distribution reflects the difficulty in model selection. The posterior probability distribution can provide a comprehensive characterisation of the uncertainty in this kind of aerosol-type selection problem. As a result, the proposed method can account for the model error and also include the model selection uncertainty in the total uncertainty budget.
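A numerical sketch of evidence-weighted Bayesian model averaging in this setting, assuming three toy candidate aerosol models whose likelihoods over an AOD grid are simple Gaussians; the OMI look-up tables, forward model, and model-discrepancy terms are replaced by invented numbers.

```python
# Sketch: evidence-weighted Bayesian model averaging of AOD posteriors from a
# few candidate aerosol models. Each model here is a toy Gaussian likelihood on
# an AOD grid; real LUT-based forward models and model-discrepancy terms are
# not reproduced.
import numpy as np

aod = np.linspace(0.0, 2.0, 401)                        # AOD grid
prior = np.ones_like(aod) / np.trapz(np.ones_like(aod), aod)   # flat prior

# toy per-model likelihoods: (best-fit AOD, width, overall fit quality)
models = {"weakly_absorbing": (0.45, 0.05, 0.9),
          "absorbing":        (0.55, 0.08, 0.6),
          "dust":             (0.80, 0.10, 0.2)}

posteriors, evidences = {}, {}
for name, (mu, sd, quality) in models.items():
    like = quality * np.exp(-0.5 * ((aod - mu) / sd) ** 2)
    evidences[name] = np.trapz(like * prior, aod)       # model evidence
    posteriors[name] = like * prior / evidences[name]

w = np.array(list(evidences.values()))
w = w / w.sum()                                         # posterior model probs
bma = sum(wi * posteriors[name] for wi, name in zip(w, models))
print("model weights:", dict(zip(models, np.round(w, 3))))
print("BMA mean AOD: %.3f" % np.trapz(aod * bma, aod))
```

The model weights are the normalized evidences; when one model dominates, the averaged posterior collapses to that model's posterior, and when several fit comparably, the averaged posterior widens, reflecting the type-selection uncertainty.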
Computational Hemodynamics Involving Artificial Devices
NASA Technical Reports Server (NTRS)
Kwak, Dochan; Kiris, Cetin; Feiereisen, William (Technical Monitor)
2001-01-01
This paper reports the progress being made towards developing complete blood flow simulation capability in humans, especially in the presence of artificial devices such as valves and ventricular assist devices. Device modeling poses unique challenges different from computing the blood flow in natural hearts and arteries. Many elements are needed, such as flow solvers, geometry modeling including flexible walls, moving-boundary procedures, and physiological characterization of blood. As a first step, computational technology developed for aerospace applications was extended in the recent past to the analysis and development of mechanical devices. The blood flow in these devices is practically incompressible and Newtonian, and thus various incompressible Navier-Stokes solution procedures can be selected depending on the choice of formulations, variables, and numerical schemes. Two primitive-variable formulations used are discussed, as well as the overset grid approach to handle complex moving geometry. This procedure has been applied to several artificial devices. Among these, recent progress made in developing the DeBakey axial-flow blood pump will be presented from a computational point of view. Computational and clinical issues will be discussed in detail, as well as additional work needed.
IMBLMS phase B4, additional tasks 5.0. Microbial identification system
NASA Technical Reports Server (NTRS)
1971-01-01
A laboratory study was undertaken to provide simplified procedures leading to the presumptive identification (I/D) of defined microorganisms on board an orbiting spacecraft. Identifications were to be initiated by nonprofessional bacteriologists (crew members) on a contingency basis only. Key objectives/constraints for this investigation were as follows: (1) I/D procedures based on limited, defined diagnostic tests; (2) testing oriented around ten selected microorganisms; (3) provision of a definitive I/D key and procedures per selected organism; (4) definition of possible occurrences of false positives for the resulting I/D key by search of the appropriate literature; and (5) evaluation of the I/D key and procedure through a limited field trial on randomly selected subjects using the I/D key.
Code of Federal Regulations, 2011 CFR
2011-07-01
... opportunities of members of any race, sex, or ethnic group will be considered to be discriminatory and... alternative methods of using the selection procedure which have as little adverse impact as possible, to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... opportunities of members of any race, sex, or ethnic group will be considered to be discriminatory and... alternative methods of using the selection procedure which have as little adverse impact as possible, to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... opportunities of members of any race, sex, or ethnic group will be considered to be discriminatory and... alternative methods of using the selection procedure which have as little adverse impact as possible, to...
Code of Federal Regulations, 2012 CFR
2012-07-01
... opportunities of members of any race, sex, or ethnic group will be considered to be discriminatory and... alternative methods of using the selection procedure which have as little adverse impact as possible, to...
Impacts of Climate Policy on Regional Air Quality, Health, and Air Quality Regulatory Procedures
NASA Astrophysics Data System (ADS)
Thompson, T. M.; Selin, N. E.
2011-12-01
Both the changing climate, and the policy implemented to address climate change can impact regional air quality. We evaluate the impacts of potential selected climate policies on modeled regional air quality with respect to national pollution standards, human health and the sensitivity of health uncertainty ranges. To assess changes in air quality due to climate policy, we couple output from a regional computable general equilibrium economic model (the US Regional Energy Policy [USREP] model), with a regional air quality model (the Comprehensive Air Quality Model with Extensions [CAMx]). USREP uses economic variables to determine how potential future U.S. climate policy would change emissions of regional pollutants (CO, VOC, NOx, SO2, NH3, black carbon, and organic carbon) from ten emissions-heavy sectors of the economy (electricity, coal, gas, crude oil, refined oil, energy intensive industry, other industry, service, agriculture, and transportation [light duty and heavy duty]). Changes in emissions are then modeled using CAMx to determine the impact on air quality in several cities in the Northeast US. We first calculate the impact of climate policy by using regulatory procedures used to show attainment with National Ambient Air Quality Standards (NAAQS) for ozone and particulate matter. Building on previous work, we compare those results with the calculated results and uncertainties associated with human health impacts due to climate policy. This work addresses a potential disconnect between NAAQS regulatory procedures and the cost/benefit analysis required for and by the Clean Air Act.
ERIC Educational Resources Information Center
Grunes, Paul; Gudmundsson, Amanda; Irmer, Bernd
2014-01-01
Researchers have found that transformational leadership is related to positive outcomes in educational institutions. Hence, it is important to explore constructs that may predict leadership style in order to identify potential transformational leaders in assessment and selection procedures. Several studies in non-educational settings have found…
Procedures for the Measurement of Vapor Sorption Followed by Desorption and Comparisons with Polymer Cohesion Parameter and Polymer Coil Expansion Values
1992-09-01
ERIC Educational Resources Information Center
Compton, Donald L.; Fuchs, Douglas; Fuchs, Lynn S.; Bryant, Joan D.
2006-01-01
Response to intervention (RTI) models for identifying learning disabilities rely on the accurate identification of children who, without Tier 2 tutoring, would develop reading disability (RD). This study examined 2 questions concerning the use of 1st-grade data to predict future RD: (1) Does adding initial word identification fluency (WIF) and 5…
the SRI program GAMUT, which is a simulation covering much the same ground as the STS-2 package but with a great reduction in the level of detail ... that is considered. It provides the means of rapidly and cheaply changing the input conditions and operating procedures used in the simulation. Selected preliminary results of the GAMUT model are given.
Von Guerard, Paul; Weiss, W.B.
1995-01-01
The U.S. Environmental Protection Agency requires that municipalities that have a population of 100,000 or greater obtain National Pollutant Discharge Elimination System permits to characterize the quality of their storm runoff. In 1992, the U.S. Geological Survey, in cooperation with the Colorado Springs City Engineering Division, began a study to characterize the water quality of storm runoff and to evaluate procedures for the estimation of storm-runoff loads, volume, and event-mean concentrations for selected properties and constituents. Precipitation, streamflow, and water-quality data were collected during 1992 at five sites in Colorado Springs. Thirty-five samples were collected, seven at each of the five sites. At each site, three samples were collected for permitting purposes; two of the samples were collected during rainfall runoff, and one sample was collected during snowmelt runoff. Four additional samples were collected at each site to obtain a large enough sample size to estimate storm-runoff loads, volume, and event-mean concentrations for selected properties and constituents using linear-regression procedures developed using data from the Nationwide Urban Runoff Program (NURP). Storm-water samples were analyzed for as many as 186 properties and constituents. The constituents measured include total-recoverable metals, volatile organic compounds, acid-base/neutral organic compounds, and pesticides. Storm runoff sampled had large concentrations of chemical oxygen demand and 5-day biochemical oxygen demand. Chemical oxygen demand ranged from 100 to 830 milligrams per liter, and 5-day biochemical oxygen demand ranged from 14 to 260 milligrams per liter. Total-organic carbon concentrations ranged from 18 to 240 milligrams per liter. Lead and zinc had the largest concentrations of the total-recoverable metals analyzed. Concentrations of lead ranged from 23 to 350 micrograms per liter, and concentrations of zinc ranged from 110 to 1,400 micrograms per liter. The data for 30 storms representing rainfall runoff from 5 drainage basins were used to develop single-storm local-regression models. The response variables (storm-runoff loads, volume, and event-mean concentrations) were modeled using explanatory variables for climatic, physical, and land-use characteristics. The r2 for models that use ordinary least-squares regression ranged from 0.57 to 0.86 for storm-runoff loads and volume and from 0.25 to 0.63 for storm-runoff event-mean concentrations. Except for cadmium, standard errors of estimate ranged from 43 to 115 percent for storm-runoff loads and volume and from 35 to 66 percent for storm-runoff event-mean concentrations. Eleven of the 30 total-recoverable cadmium concentrations collected during rainfall runoff were censored (reported as less-than values). Ordinary least-squares regression should not be used with censored data; however, censored data can be included with uncensored data using tobit regression. Standard errors of estimate for storm-runoff load and event-mean concentration for total-recoverable cadmium, computed using tobit regression, are 247 and 171 percent. Estimates from single-storm regional-regression models, developed from the Nationwide Urban Runoff Program database, were compared with observed storm-runoff loads, volume, and event-mean concentrations determined from samples collected in the study area. Single-storm regional-regression models tended to overestimate storm-runoff loads, volume, and event-mean concentrations.
Therefore, single-storm local- and regional-regression models were combined using model-adjustment procedures to take advantage of the strengths of both models while minimizing the deficiencies of each model. Procedures were used to develop single-storm regression equations that were adjusted using local data and estimates from single-storm regional-regression equations. Single-storm regression models developed using model-adjustment proce…
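A minimal sketch of a tobit (left-censored) regression of the kind mentioned above for detection-limit data, fitted by maximum likelihood on synthetic values; the explanatory variables, units, and detection limit are illustrative, not those of the study.

```python
# Sketch: tobit (left-censored) regression by maximum likelihood, the kind of
# model used to keep "less than" detection-limit values in a concentration
# regression. Synthetic data; variables, units and the limit are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 120
x = rng.uniform(0, 2, size=n)                      # e.g. a basin characteristic
y_star = 0.5 + 1.2 * x + rng.normal(scale=0.6, size=n)   # latent log-concentration
L = 1.0                                            # detection limit (log units)
cens = y_star < L
y = np.where(cens, L, y_star)                      # observed, censored values

def negloglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    ll_unc = norm.logpdf(y[~cens], loc=mu[~cens], scale=s)   # observed values
    ll_cen = norm.logcdf((L - mu[cens]) / s)                 # censored values
    return -(ll_unc.sum() + ll_cen.sum())

res = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, log_s = res.x
print(f"tobit fit: intercept={b0:.2f} slope={b1:.2f} sigma={np.exp(log_s):.2f}"
      f"  ({cens.sum()} of {n} observations censored)")
```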
NASA Astrophysics Data System (ADS)
Robati, Masoud
This doctoral program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures. Many research problems related to the rutting resistance of micro-surfacing mixtures still remain to be solved. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. During this Ph.D. program, major aspects related to the rutting resistance of micro-surfacing mixtures are investigated and presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: On the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface. Unlike the current mix design guidelines and specifications, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: Within this study, a new specification for the selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: In this study, limits for the repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on the asphalt mastic of micro-surfacing: A new model is proposed, which is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's key physical and chemical properties; 5) incorporation of reclaimed asphalt pavement and post-fabrication asphalt shingles in micro-surfacing mixtures: The effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials. The results establish limits on the amounts of RAP and RAS that can be used in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: A significant improvement of around 45% in the rutting resistance of colored and conventional micro-surfacing mixtures is achieved by employing a polymer-modified asphalt emulsion based on low-penetration-grade bitumen and stabilized using nanoparticles.
The Effect of Airborne Contaminants on Fuel Cell Performance and Durability
DOE Office of Scientific and Technical Information (OSTI.GOV)
St-Pierre, Jean; Pasaogullari, Ugur; Cheng, Tommy
The impact of contaminants on fuel cell performance was examined to document air filter specifications (prevention) and devise recovery procedures (maintenance) that are effective at the system level. Eight previously undocumented airborne contaminants were selected for detailed studies, and characterization data were used to identify operating conditions that intensify contamination effects. The use of many complementary electrochemical, chemical, and physical characterization methods and the derivation of several mathematical models supported the formulation of contamination mechanisms and the development of recovery procedures. The complexity of these contamination mechanisms suggests a shift to prevention and generic maintenance measures. Only two of the selected contaminants led to cell voltage losses after injection was interrupted. Proposed recovery procedures for calcium ions, a component of road de-icers, desiccants, fertilizers, and soil conditioners, were either ineffective or partly effective, whereas for bromomethane, a fumigant, the cell voltage was recovered to its initial value before contamination by manipulating and sequencing operating conditions. However, implementation for a fuel cell stack and system remains to be demonstrated. Contamination mechanisms also led to the identification of membrane durability stressors. All 8 selected contaminants promote the formation of hydrogen peroxide, a known agent that can produce radicals that attack the ionomer and membrane molecular structure, whereas the dehydrating effect of calcium ions on the ionomer and membrane increases their brittleness and favors the creation of pinholes under mechanical stresses. Data related to acetylene, acetonitrile, and calcium ions are emphasized in the report.
O'Boyle, Noel M; Palmer, David S; Nigsch, Florian; Mitchell, John BO
2008-01-01
Background We present a novel feature selection algorithm, Winnowing Artificial Ant Colony (WAAC), that performs simultaneous feature selection and model parameter optimisation for the development of predictive quantitative structure-property relationship (QSPR) models. The WAAC algorithm is an extension of the modified ant colony algorithm of Shen et al. (J Chem Inf Model 2005, 45: 1024–1029). We test the ability of the algorithm to develop a predictive partial least squares model for the Karthikeyan dataset (J Chem Inf Model 2005, 45: 581–590) of melting point values. We also test its ability to perform feature selection on a support vector machine model for the same dataset. Results Starting from an initial set of 203 descriptors, the WAAC algorithm selected a PLS model with 68 descriptors which has an RMSE on an external test set of 46.6°C and R2 of 0.51. The number of components chosen for the model was 49, which was close to optimal for this feature selection. The selected SVM model has 28 descriptors (cost of 5, ε of 0.21) and an RMSE of 45.1°C and R2 of 0.54. This model outperforms a kNN model (RMSE of 48.3°C, R2 of 0.47) for the same data and has similar performance to a Random Forest model (RMSE of 44.5°C, R2 of 0.55). However it is much less prone to bias at the extremes of the range of melting points as shown by the slope of the line through the residuals: -0.43 for WAAC/SVM, -0.53 for Random Forest. Conclusion With a careful choice of objective function, the WAAC algorithm can be used to optimise machine learning and regression models that suffer from overfitting. Where model parameters also need to be tuned, as is the case with support vector machine and partial least squares models, it can optimise these simultaneously. The moving probabilities used by the algorithm are easily interpreted in terms of the best and current models of the ants, and the winnowing procedure promotes the removal of irrelevant descriptors. PMID:18959785
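A heavily simplified, pheromone-weighted feature-selection loop in the spirit of the algorithm described above, wrapped around a cross-validated support vector regression on synthetic descriptors; the real WAAC also tunes the SVM cost and epsilon and applies a winnowing rule, neither of which is reproduced here.

```python
# Sketch: a pheromone-weighted feature-selection loop (in the spirit of WAAC)
# around support vector regression with cross-validated RMSE. The real
# algorithm also tunes SVM cost/epsilon and winnows descriptors; the data here
# are synthetic stand-ins for molecular descriptors.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_mols, n_desc = 200, 40
X = rng.normal(size=(n_mols, n_desc))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n_mols)

def cv_rmse(cols):
    scores = cross_val_score(SVR(C=10.0, epsilon=0.1), X[:, cols], y,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

tau = np.full(n_desc, 0.5)                       # per-descriptor probabilities
best_cols, best_rmse = None, np.inf
for it in range(30):                             # "ant" iterations
    cols = np.flatnonzero(rng.random(n_desc) < tau)
    if cols.size == 0:
        continue
    rmse = cv_rmse(cols)
    if rmse < best_rmse:
        best_cols, best_rmse = cols, rmse
    tau *= 0.9                                   # evaporation
    tau[best_cols] += 0.1                        # reinforce current best subset
    tau = np.clip(tau, 0.05, 0.95)

print("selected descriptors:", best_cols, " CV RMSE: %.3f" % best_rmse)
```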
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1975-01-01
This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
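A toy sketch of the modeling ingredients named above (continuous time, discrete crack-length states, expected cost), assuming an invented generator matrix and cost figures; NDT reliability, initial flaw-size distributions, and repair thresholds from the paper are not modeled.

```python
# Sketch: a toy continuous-time Markov chain over discrete crack-length states,
# used to compare inspection intervals by expected cost rate per cycle
# (inspection cost plus failure-risk cost). NDT detection reliability, initial
# flaw-size distributions and repair thresholds are not modeled here.
import numpy as np
from scipy.linalg import expm

n_states = 6                         # states 0..4 = crack-length bins, 5 = failure
rates = np.array([0.02, 0.03, 0.05, 0.08, 0.12])   # growth rate out of each bin

Q = np.zeros((n_states, n_states))   # CTMC generator; failure state is absorbing
for i, lam in enumerate(rates):
    Q[i, i + 1] = lam
    Q[i, i] = -lam

C_INSPECT, C_FAIL = 1.0, 500.0       # invented cost figures

print(" T (hours)   P(failure by T)   expected cost rate")
for T in [10, 25, 50, 100, 200]:
    P = expm(Q * T)                  # transition probabilities over one interval
    p_fail = P[0, -1]                # each cycle restarts from the smallest bin
    cost_rate = (C_INSPECT + C_FAIL * p_fail) / T
    print(f"{T:8d} {p_fail:18.4f} {cost_rate:16.4f}")
```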
Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...
2016-02-11
Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, accounting for linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
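A minimal sketch of the marker-data transformation idea on synthetic genotypes and phenotypes: ridge-regression genomic prediction is compared with and without passing the marker matrix through its marker correlation matrix (Z = M·corr(M)); the exact transformation and prediction procedures used in the study are assumptions here.

```python
# Sketch: ridge-regression genomic prediction with and without transforming the
# marker matrix through the marker correlation matrix (Z = M @ corr(M)).
# Genotypes and phenotypes are synthetic; the exact transformation used in the
# study is assumed here for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_fam, n_markers = 140, 600
M = rng.binomial(2, 0.3, size=(n_fam, n_markers)).astype(float)   # 0/1/2 dosages
M -= M.mean(axis=0)                                               # center markers
effects = np.zeros(n_markers)
effects[rng.choice(n_markers, 40, replace=False)] = rng.normal(size=40)
y = M @ effects + rng.normal(scale=2.0, size=n_fam)               # e.g. yield

C = np.corrcoef(M, rowvar=False)          # marker x marker correlation matrix
Z = M @ C                                 # correlation-transformed predictors

for name, X in [("raw markers", M), ("corr-transformed", Z)]:
    acc = cross_val_score(Ridge(alpha=100.0), X, y, cv=5, scoring="r2")
    print(f"{name:18s} mean CV R^2 = {acc.mean():.3f}")
```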