Sample records for selection criterion based

  1. Criterion learning in rule-based categorization: Simulation of neural mechanism and new data

    PubMed Central

    Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd

    2015-01-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule-selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing the rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349

  2. Criterion learning in rule-based categorization: simulation of neural mechanism and new data.

    PubMed

    Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd

    2015-04-01

    In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule-selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing the rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. A determinant-based criterion for working correlation structure selection in generalized estimating equations.

    PubMed

    Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S

    2016-05-20

    In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that, if the assumed working correlation structure is correct, the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion based on the bias-corrected sandwich covariance estimator is proposed in this paper for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure is also shown using data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
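
    The trace comparison behind QIC and CIC is simple to compute once a GEE fit has produced the two covariance estimates. Below is a minimal numpy sketch, with hypothetical covariance matrices standing in for GEE output; the paper's bias-corrected sandwich estimator would replace the plain sandwich estimate here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cic(v_model, v_sandwich):
        # CIC = trace(V_model^{-1} V_sandwich); it stays near p (the number
        # of regression coefficients) when the working structure is adequate.
        return float(np.trace(np.linalg.solve(v_model, v_sandwich)))

    p = 4
    # Hypothetical covariance estimates for two candidate working structures.
    fits = {
        "exchangeable": (0.05 * np.eye(p),
                         0.05 * np.eye(p) + 0.002 * np.diag(rng.random(p))),
        "AR(1)":        (0.05 * np.eye(p),
                         0.05 * np.eye(p) + 0.010 * np.diag(rng.random(p))),
    }
    best = min(fits, key=lambda k: cic(*fits[k]))
    print(best)  # the structure with the smallest CIC is selected
    ```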

  4. Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion

    PubMed Central

    Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.

    2016-01-01

    Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391

  5. Model selection for multi-component frailty models.

    PubMed

    Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert

    2007-11-20

    Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion, and the particular criterion chosen may not be uniformly best. In this paper, we study Akaike information criteria (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended when attention is focussed on selecting the frailty structure rather than the fixed effects.

  6. The cross-validated AUC for MCP-logistic regression with high-dimensional data.

    PubMed

    Jiang, Dingfeng; Huang, Jian; Zhang, Ying

    2013-10-01

    We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
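
    As an illustration of tuning-parameter selection by cross-validated AUC, here is a hedged scikit-learn sketch; it substitutes an L1 penalty for the MCP penalty (which scikit-learn does not provide) and synthetic data for the microarray sets, so it shows the selection criterion rather than the paper's exact method.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    # Synthetic sparse, high-dimensional binary data.
    X, y = make_classification(n_samples=200, n_features=500,
                               n_informative=10, random_state=0)

    # The tuning parameter C is selected by 5-fold cross-validated AUC.
    search = GridSearchCV(
        LogisticRegression(penalty="l1", solver="liblinear", max_iter=5000),
        param_grid={"C": np.logspace(-2, 2, 20)},
        scoring="roc_auc", cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)
    ```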

  7. Economic weights for genetic improvement of lactation persistency and milk yield.

    PubMed

    Togashi, K; Lin, C Y

    2009-06-01

    This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I(1)), the first-two-lactation total yield (I(2)), and first-three-lactation total yield (I(3)). Results show that genetic response in net merit due to selection on restricted selection index could be greater than, equal to, or less than that due to the unrestricted index depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I(1), I(2), or I(3)). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear relationship (concave curve) exists between the criterion and the restricted level. The criterion increases as the restriction level deviates in either direction from 1.5. Without prior information of the economic weight of lactation persistency, the imposition of the restriction level of 1.5 on lactation persistency would maximize change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.

  8. Mutual information criterion for feature selection with application to classification of breast microcalcifications

    NASA Astrophysics Data System (ADS)

    Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit

    2016-03-01

    Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on mutual information (MI) criterion for automatic classification of microcalcifications. We explored the MI based feature selection for various texture features. The proposed method was evaluated on a standardized digital database for screening mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using the MI-based feature selection to obtain the most relevant features for the task and thus to provide for improved performance as compared to using all features.
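
    A minimal sketch of mutual-information-based feature selection in scikit-learn, with synthetic features standing in for the DDSM texture features; the selector keeps the features with the highest mutual information with the class label before classification.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=60,
                               n_informative=8, random_state=0)

    # Keep the 10 features with the highest MI with the label, then classify;
    # texture features from mammograms would replace X in the real task.
    clf = make_pipeline(SelectKBest(mutual_info_classif, k=10), SVC())
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```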

  9. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    NASA Astrophysics Data System (ADS)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings with different conditions and the features which can reflect fault characteristics more effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on diagnosis of artificially created damages of rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in detection of train axle bearing faults.
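
    The max-relevance min-redundancy step can be sketched as a greedy search. The sketch below assumes precomputed relevance scores and uses absolute correlation between features as the redundancy measure; the paper's exact feature indicators and measures may differ.

    ```python
    import numpy as np

    def mrmr(relevance, redundancy, n_select):
        """Greedy max-relevance min-redundancy selection.
        relevance: (n_features,) score of each feature against the condition label.
        redundancy: (n_features, n_features) pairwise feature similarity."""
        selected, remaining = [], list(range(len(relevance)))
        while remaining and len(selected) < n_select:
            def score(j):
                red = np.mean([redundancy[j, s] for s in selected]) if selected else 0.0
                return relevance[j] - red
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected

    rng = np.random.default_rng(1)
    rel = rng.random(30)                                # hypothetical relevance scores
    red = np.abs(np.corrcoef(rng.random((30, 100))))    # |correlation| as redundancy
    print(mrmr(rel, red, n_select=5))
    ```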

  10. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  11. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of a LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that, for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.

  12. Physical employment standards for U.K. fire and rescue service personnel.

    PubMed

    Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L

    2016-01-01

    Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. The aims were to: (i) develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel; (ii) develop practical physical selection tests for FRS applicants; and (iii) evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Technical issues affecting the implementation of US Environmental Protection Agency's proposed fish tissue-based aquatic criterion for selenium.

    PubMed

    Lemly, A Dennis; Skorupa, Joseph P

    2007-10-01

    The US Environmental Protection Agency is developing a national water quality criterion for selenium that is based on concentrations of the element in fish tissue. Although this approach offers advantages over the current water-based regulations, it also presents new challenges with respect to implementation. A comprehensive protocol that answers the "what, where, and when" is essential with the new tissue-based approach in order to ensure proper acquisition of data that apply to the criterion. Dischargers will need to understand selenium transport, cycling, and bioaccumulation in order to effectively monitor for the criterion and, if necessary, develop site-specific standards. This paper discusses 11 key issues that affect the implementation of a tissue-based criterion, ranging from the selection of fish species to the importance of hydrological units in the sampling design. It also outlines a strategy that incorporates both water column and tissue-based approaches. A national generic safety-net water criterion could be combined with a fish tissue-based criterion for site-specific implementation. For the majority of waters nationwide, National Pollution Discharge Elimination System permitting and other activities associated with the Clean Water Act could continue without the increased expense of sampling and interpreting biological materials. Dischargers would do biotic sampling intermittently (not a routine monitoring burden) on fish tissue relative to the fish tissue criterion. Only when the fish tissue criterion is exceeded would a full site-specific analysis including development of intermedia translation factors be necessary.

  14. Genomic selection in a commercial winter wheat population.

    PubMed

    He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong

    2016-03-01

    Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects results in enhancement of the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5 % increase in prediction ability when shifting from additive to additive-plus-epistatic-effects models. In addition, only a marginal loss from 0.65 to 0.50 in accuracy was observed using the data collected from 1 year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We conclude empirically that there is high potential to assist commercial wheat breeding programs with genomic selection approaches.

  15. Confident difference criterion: a new Bayesian differentially expressed gene selection algorithm with applications.

    PubMed

    Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S

    2015-08-07

    Recently, the Bayesian method has become more popular for analyzing high-dimensional gene expression data, as it allows us to borrow information across different genes and provides powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal-model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.

  16. Predictability of Seasonal Rainfall over the Greater Horn of Africa

    NASA Astrophysics Data System (ADS)

    Ngaina, J. N.

    2016-12-01

    The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified based on composite analysis, correlations and contingency tables. A test for field significance considering the properties of finiteness and interdependence of the spatial grid was applied to avoid correlations by chance. The study identified FIA as the optimal model selection criterion. However, the complex model selection criteria (FIA followed by BIC) performed better than the simple approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g., R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with the use of ENSO indices derived from the Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability based on ENSO for OND rainfall is robust on a decadal scale compared to MAM. An ENSO-based scheme based on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase of ENSO (El Niño) anticipates enhanced wet conditions.
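
    For reference, the simpler criteria compared in the study (R2, AIC, BIC) can be computed directly from a least-squares fit. A numpy sketch with synthetic predictors standing in for ENSO indices; the parameter count here includes only the mean parameters, a simplifying assumption.

    ```python
    import numpy as np

    def ols_criteria(X, y):
        """R^2, AIC and BIC for a Gaussian linear model fitted by least squares."""
        n, k = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sse = float(resid @ resid)
        sst = float(((y - y.mean()) ** 2).sum())
        loglik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)  # Gaussian MLE
        return {"R2": 1 - sse / sst,
                "AIC": -2 * loglik + 2 * k,
                "BIC": -2 * loglik + np.log(n) * k}

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])  # e.g. ENSO indices
    y = X @ np.array([1.0, 0.8, 0.0]) + rng.normal(scale=0.5, size=40)
    print(ols_criteria(X, y))
    ```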

  17. Vortex Advisory System : Volume 1. Effectiveness for Selected Airports.

    DOT National Transportation Integrated Search

    1980-05-01

    The Vortex Advisory System (VAS) is based on a wind criterion--when the wind near the runway end is outside of the criterion, all interarrival Instrument Flight Rules (IFR) aircraft separations can be set at 3 nautical miles. Five years of wind data ha...

  18. The System of Objectified Judgement Analysis (SOJA). A tool in rational drug selection for formulary inclusion.

    PubMed

    Janknegt, R; Steenhoek, A

    1997-04-01

    Rational drug selection for formulary purposes is important. Besides rational selection criteria, other factors play a role in drug decision making, such as emotional, personal financial and even unconscious criteria. It is agreed that these factors should be excluded as much as possible in the decision making process. A model for drug decision making for formulary purposes is described, the System of Objectified Judgement Analysis (SOJA). In the SOJA method, selection criteria for a given group of drugs are prospectively defined and the extent to which each drug fulfils the requirements for each criterion is determined. Each criterion is given a relative weight, i.e. the more important a given selection criterion is considered, the higher the relative weight. Both the relative scores for each drug per selection criterion and the relative weight of each criterion are determined by a panel of experts in this field. The following selection criteria are applied in all SOJA scores: clinical efficacy, incidence and severity of adverse effects, dosage frequency, drug interactions, acquisition cost, documentation, pharmacokinetics and pharmaceutical aspects. Besides these criteria, group specific criteria are also used, such as development of resistance when a SOJA score was made for antimicrobial agents. The relative weight that is assigned to each criterion will always be a subject of discussion. Therefore, interactive software programs for use on a personal computer have been developed, in which the user of the system may enter their own personal relative weight to each selection criterion and make their own personal SOJA score. The main advantage of the SOJA method is that all nonrational selection criteria are excluded and that drug decision making is based solely on rational criteria. The use of the interactive SOJA discs makes the decision process fully transparent as it becomes clear on which criteria and weighting decisions are based. We have seen that the use of this method for drug decision making greatly aids the discussion in the formulary committee, as discussion becomes much more concrete. The SOJA method is time dependent. Documentation on most products is still increasing and the score for this criterion will therefore change continuously. New products are introduced and prices are also subject to change. To overcome the time-dependence of the SOJA method, regular updates of interactive software programs are being made, in which changes in acquisition cost, documentation or a different weighting of criteria are included, as well as newly introduced products. The possibility of changing the official acquisition cost into the actual purchasing costs for the hospital in question provides a tailor-made interactive program.
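
    The core of a SOJA score is a weighted sum of per-criterion scores. A minimal sketch with hypothetical drugs, scores, and weights (the interactive programs described above let each user supply their own weights):

    ```python
    import numpy as np

    # Hypothetical SOJA-style matrix: rows = drugs, columns = criterion scores (0-100).
    scores = np.array([[80, 60, 90, 70],
                       [70, 85, 60, 80],
                       [90, 70, 50, 60]], dtype=float)
    criteria = ["efficacy", "adverse effects", "cost", "documentation"]
    weights = np.array([0.4, 0.3, 0.2, 0.1])   # user-supplied relative weights

    soja = scores @ weights                    # weighted sum per drug
    print(dict(zip(["drug A", "drug B", "drug C"], soja.round(1))))
    ```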

  19. Drinking Water Quality Criterion - Based site Selection of Aquifer Storage and Recovery Scheme in Chou-Shui River Alluvial Fan

    NASA Astrophysics Data System (ADS)

    Huang, H. E.; Liang, C. P.; Jang, C. S.; Chen, J. S.

    2015-12-01

    Land subsidence due to groundwater exploitation is an urgent environmental problem in Choushui river alluvial fan in Taiwan. Aquifer storage and recovery (ASR), where excess surface water is injected into subsurface aquifers for later recovery, is one promising strategy for managing surplus water and may overcome water shortages. The performance of an ASR scheme is generally evaluated in terms of recovery efficiency, which is defined as percentage of water injected in to a system in an ASR site that fulfills the targeted water quality criterion. Site selection of an ASR scheme typically faces great challenges, due to the spatial variability of groundwater quality and hydrogeological condition. This study proposes a novel method for the ASR site selection based on drinking quality criterion. Simplified groundwater flow and contaminant transport model spatial distributions of the recovery efficiency with the help of the groundwater quality, hydrological condition, ASR operation. The results of this study may provide government administrator for establishing reliable ASR scheme.

  20. 76 FR 21985 - Notice of Final Priorities, Requirements, Definitions, and Selection Criteria

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-19

    ... only after a research base has been established to support the use of the assessments for such purposes..., research-based assessment practices. Discussion: We agree that the selection criteria should address the... selection criterion, which addresses methods of scoring, to allow for self-scoring of student performance on...

  1. New Stopping Criteria for Segmenting DNA Sequences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Wentian

    2001-06-18

    We propose a solution to the stopping criterion in segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength, which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
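
    A hedged sketch of BIC-stopped recursive segmentation on a binary sequence; the Bernoulli segment model and the penalty choice (two extra parameters per split) are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    def bern_loglik(x):
        # Maximized Bernoulli log-likelihood of a 0/1 segment.
        n, k = len(x), x.sum()
        p = k / n
        if p in (0.0, 1.0):
            return 0.0
        return k * np.log(p) + (n - k) * np.log(1 - p)

    def segment(x, start=0, out=None):
        """Recursively split while the BIC-penalised gain of the best split
        is positive; each split adds one Bernoulli parameter and one breakpoint."""
        if out is None:
            out = [start]
        n = len(x)
        best_gain, best_i = -np.inf, None
        for i in range(2, n - 1):
            gain = 2 * (bern_loglik(x[:i]) + bern_loglik(x[i:]) - bern_loglik(x))
            if gain > best_gain:
                best_gain, best_i = gain, i
        if best_i is not None and best_gain > 2 * np.log(n):   # BIC threshold
            segment(x[:best_i], start, out)
            out.append(start + best_i)
            segment(x[best_i:], start + best_i, out)
        return out

    rng = np.random.default_rng(0)
    seq = np.concatenate([rng.random(300) < 0.2, rng.random(300) < 0.6]).astype(int)
    print(segment(seq))   # a border should appear near position 300
    ```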

  2. 29 CFR 1630.10 - Qualification standards, tests, and other selection criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... business necessity. (b) Qualification standards and tests related to uncorrected vision. Notwithstanding..., or other selection criteria based on an individual's uncorrected vision unless the standard, test, or... application of a qualification standard, test, or other criterion based on uncorrected vision need not be a...

  3. Precoded spatial multiplexing MIMO system with spatial component interleaver.

    PubMed

    Gao, Xiang; Wu, Zhanji

    In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
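
    The effective-SNR selection idea for a ZF receiver can be sketched directly: for each codebook entry, compute the per-stream post-processing SNRs and keep the precoder that maximises the worst stream. The random unitary codebook below is a stand-in for a standards-defined one, and the sketch omits the spatial component interleaver.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    nt, nr, ns = 4, 4, 2          # tx antennas, rx antennas, streams
    H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
    snr = 10.0                    # transmit SNR (linear)

    def random_precoder():
        # Columns orthonormal; a toy stand-in for a codebook entry.
        q, _ = np.linalg.qr(rng.normal(size=(nt, ns)) + 1j * rng.normal(size=(nt, ns)))
        return q[:, :ns]

    codebook = [random_precoder() for _ in range(8)]

    def min_effective_snr(F):
        # ZF post-processing SNR of stream k: snr / [(F^H H^H H F)^{-1}]_{kk}
        G = F.conj().T @ H.conj().T @ H @ F
        diag = np.real(np.diag(np.linalg.inv(G)))
        return np.min(snr / diag)

    best = max(codebook, key=min_effective_snr)   # maximise the worst-stream SNR
    print(min_effective_snr(best))
    ```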

  4. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected criterion (AICC), the consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  5. To select or to wait? The importance of criterion setting in debates of competitive lexical selection.

    PubMed

    Nozari, Nazbanou; Hepner, Christopher R

    2018-06-05

    Competitive accounts of lexical selection propose that the activation of competitors slows down the selection of the target. Non-competitive accounts, on the other hand, posit that target response latencies are independent of the activation of competing items. In this paper, we propose a signal detection framework for lexical selection and show how a flexible selection criterion affects claims of competitive selection. Specifically, we review evidence from neurotypical and brain-damaged speakers and demonstrate that task goals and the state of the production system determine whether a competitive or a non-competitive selection profile arises. We end by arguing that there is conclusive evidence for a flexible criterion in lexical selection, and that integrating criterion shifts into models of language production is critical for evaluating theoretical claims regarding (non-)competitive selection.

  6. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    PubMed

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) an intensive-computation statistical criterion (Bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.

  7. Model Selection and Psychological Theory: A Discussion of the Differences between the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC)

    ERIC Educational Resources Information Center

    Vrieze, Scott I.

    2012-01-01

    This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…

  8. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    PubMed

    Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H

    2017-01-01

    In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.

  9. Fuzzy approaches to supplier selection problem

    NASA Astrophysics Data System (ADS)

    Ozkok, Beyza Ahlatcioglu; Kocken, Hale Gonce

    2013-09-01

    Supplier selection is a multi-criteria decision-making problem which includes both qualitative and quantitative factors. In the selection process many criteria may conflict with each other, so the decision-making process becomes complicated. In this study, we handled the supplier selection problem under uncertainty. In this context, we used the minimum criterion, the arithmetic mean criterion, the regret criterion, the optimistic criterion, the geometric mean and the harmonic mean. Membership functions are created with the help of the characteristics of the criteria used, and we provide consistent supplier selection decisions by using these memberships to evaluate alternative suppliers. A strong aspect of the methodology is that no expert opinion is needed during the analysis.
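
    The aggregation criteria named above are one-liners over a normalised rating matrix. A numpy sketch with hypothetical supplier ratings; the fuzzy membership construction itself is not reproduced here.

    ```python
    import numpy as np

    # Hypothetical normalised ratings: rows = suppliers, columns = criteria in [0, 1].
    r = np.array([[0.8, 0.6, 0.9],
                  [0.7, 0.9, 0.5],
                  [0.6, 0.7, 0.8]])

    agg = {
        "minimum":    r.min(axis=1),                     # pessimistic (Wald)
        "optimistic": r.max(axis=1),
        "arithmetic": r.mean(axis=1),
        "geometric":  np.prod(r, axis=1) ** (1 / r.shape[1]),
        "harmonic":   r.shape[1] / (1 / r).sum(axis=1),
        "regret":     -(r.max(axis=0) - r).max(axis=1),  # minimax regret (higher is better)
    }
    for name, scores in agg.items():
        print(f"{name:10s} -> supplier {int(np.argmax(scores)) + 1}")
    ```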

  10. Can we use genetic and genomic approaches to identify candidate animals for targeted selective treatment.

    PubMed

    Laurenson, Yan C S M; Kyriazakis, Ilias; Bishop, Stephen C

    2013-10-18

    Estimated breeding values (EBV) for faecal egg count (FEC) and genetic markers for host resistance to nematodes may be used to identify resistant animals for selective breeding programmes. Similarly, targeted selective treatment (TST) requires the ability to identify the animals that will benefit most from anthelmintic treatment. A mathematical model was used to combine the concepts and evaluate the potential of using genetic-based methods to identify animals for a TST regime. EBVs obtained by genomic prediction were predicted to be the best determinant criterion for TST in terms of the impact on average empty body weight and average FEC, whereas pedigree-based EBVs for FEC were predicted to be marginally worse than using phenotypic FEC as a determinant criterion. Whilst each method has financial implications, if the identification of host resistance is incorporated into wider genomic selection indices or selective breeding programmes, then genetic or genomic information may plausibly be included in TST regimes. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. An Empirical Model Building Criterion Based on Prediction with Applications in Parametric Cost Estimation.

    DTIC Science & Technology

    1980-08-01

    If the mean of the dependent variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ(Yᵢ − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best", ... discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of
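
    The quantities in (2.6)-(2.8) in numpy, for arbitrary fitted values:

    ```python
    import numpy as np

    y = np.array([2.0, 3.1, 4.2, 4.8, 6.1])
    yhat = np.array([2.1, 3.0, 4.0, 5.0, 6.0])   # fitted values from some model

    ssto = ((y - y.mean()) ** 2).sum()   # total sum of squares (2.6)
    sse = ((y - yhat) ** 2).sum()        # error sum of squares
    ssr = ssto - sse                     # regression sum of squares (2.7)
    r2 = 1 - sse / ssto                  # coefficient of determination (2.8)
    print(ssr, r2)
    ```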

  12. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

    Ellipticity angle of electromagnetic signals and its use for non-energetic detection optimal by the Neyman-Pearson criterion

    NASA Astrophysics Data System (ADS)

    Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.

    2012-08-01

    An interval method of radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimization of the given method are provided.
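
    The Neyman-Pearson recipe fixes the detection threshold from the noise-only distribution of the statistic, then reads off the detection probability at a given signal level. A scipy sketch with a Gaussian statistic standing in for the ellipticity-angle statistic (an assumption for illustration):

    ```python
    from scipy import stats

    p_fa = 1e-3                 # preset false-alarm probability
    noise_sigma = 1.0
    signal_mean = 3.0           # hypothetical mean shift under "signal present"

    # Threshold set by the noise-only distribution, not by signal energy.
    threshold = stats.norm.ppf(1 - p_fa, loc=0, scale=noise_sigma)
    p_d = 1 - stats.norm.cdf(threshold, loc=signal_mean, scale=noise_sigma)
    print(threshold, p_d)
    ```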

  14. Dynamic Portfolio Strategy Using Clustering Approach

    PubMed Central

    Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market. PMID:28129333

  15. Dynamic Portfolio Strategy Using Clustering Approach.

    PubMed

    Ren, Fei; Lu, Ya-Nan; Li, Sai-Ping; Jiang, Xiong-Fei; Zhong, Li-Xin; Qiu, Tian

    2017-01-01

    The problem of portfolio optimization is one of the most important issues in asset management. We here propose a new dynamic portfolio strategy based on the time-varying structures of MST networks in Chinese stock markets, where the market condition is further considered when using the optimal portfolios for investment. A portfolio strategy comprises two stages: First, select the portfolios by choosing central and peripheral stocks in the selection horizon using five topological parameters, namely degree, betweenness centrality, distance on degree criterion, distance on correlation criterion and distance on distance criterion. Second, use the portfolios for investment in the investment horizon. The optimal portfolio is chosen by comparing central and peripheral portfolios under different combinations of market conditions in the selection and investment horizons. Market conditions in our paper are identified by the ratios of the number of trading days with rising index to the total number of trading days, or the sum of the amplitudes of the trading days with rising index to the sum of the amplitudes of the total trading days. We find that central portfolios outperform peripheral portfolios when the market is under a drawup condition, or when the market is stable or drawup in the selection horizon and is under a stable condition in the investment horizon. We also find that peripheral portfolios gain more than central portfolios when the market is stable in the selection horizon and is drawdown in the investment horizon. Empirical tests are carried out based on the optimal portfolio strategy. Among all possible optimal portfolio strategies based on different parameters to select portfolios and different criteria to identify market conditions, 65% of our optimal portfolio strategies outperform the random strategy for the Shanghai A-Share market while the proportion is 70% for the Shenzhen A-Share market.
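
    A minimal sketch of the MST construction and the central/peripheral split by node degree, using synthetic returns in place of Chinese stock data; betweenness centrality and the three distance-based parameters named above would be computed from the same tree.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree

    rng = np.random.default_rng(0)
    returns = rng.normal(size=(250, 20))        # daily returns, 20 hypothetical stocks

    corr = np.corrcoef(returns.T)
    dist = np.sqrt(2 * (1 - corr))              # standard correlation distance
    mst = minimum_spanning_tree(dist).toarray()

    # Degree in the MST: high-degree nodes are "central", leaves "peripheral".
    adjacency = (mst + mst.T) > 0
    degree = adjacency.sum(axis=1)
    central = np.argsort(degree)[-5:]           # candidate central portfolio
    peripheral = np.argsort(degree)[:5]         # candidate peripheral portfolio
    print(central, peripheral)
    ```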

  16. A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs

    NASA Technical Reports Server (NTRS)

    Bloomfield, Harvey S.

    1991-01-01

    The purpose was to obtain reliability and mass perspectives on the selection of space power system conceptual designs based on SP-100 reactor and Stirling-cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging-technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on the selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to the reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability, low-mass lunar-base powerplant conceptual designs.

  17. DISCOVERING BRIGHT QUASARS AT INTERMEDIATE REDSHIFTS BASED ON OPTICAL/NEAR-INFRARED COLORS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xue-Bing; Zuo, Wenwen; Yang, Jinyi

    2013-10-01

    The identification of quasars at intermediate redshifts (2.2 < z < 3.5) has been inefficient in most previous quasar surveys since the optical colors of quasars are similar to those of stars. The near-IR K-band excess technique has been suggested to overcome this difficulty. Our recent study also proposed to use optical/near-IR colors for selecting z < 4 quasars. To verify the effectiveness of this method, we selected a list of 105 unidentified bright targets with i ≤ 18.5 from the quasar candidates of SDSS DR6 with both SDSS ugriz optical and UKIDSS YJHK near-IR photometric data, which satisfy our proposed Y – K/g – z criterion and have photometric redshifts between 2.2 and 3.5 estimated from the nine-band SDSS-UKIDSS data. We observed 43 targets with the BFOSC instrument on the 2.16 m optical telescope at Xinglong station of the National Astronomical Observatory of China in the spring of 2012. We spectroscopically identified 36 targets as quasars with redshifts between 2.1 and 3.4. The high success rate of discovering these quasars in the SDSS spectroscopic surveyed area further demonstrates the robustness of both the Y – K/g – z selection criterion and the photometric redshift estimation technique. We also used the above criterion to investigate the possible stellar contamination rate among the quasar candidates of SDSS DR6, and found that the rate is much higher when selecting 3 < z < 3.5 quasar candidates than when selecting lower redshift candidates (z < 2.2). The significant improvement in the photometric redshift estimation when using the nine-band SDSS-UKIDSS data over the five-band SDSS data is demonstrated and a catalog of 7727 unidentified quasar candidates in SDSS DR6 selected with optical/near-IR colors and having photometric redshifts between 2.2 and 3.5 is provided. We also tested the Y – K/g – z selection criterion with the recently released SDSS-III/DR9 quasar catalog and found that 96.2% of 17,999 DR9 quasars with UKIDSS Y- and K-band data satisfy our criterion. With some available samples of red quasars and type II quasars, we find that 88% and 96.5% of these objects can be selected by the Y – K/g – z criterion, respectively, which supports our claim that using the Y – K/g – z criterion efficiently selects both unobscured and obscured quasars. We discuss the implications of our results on the ongoing and upcoming large optical and near-IR sky surveys.

  18. An orbital localization criterion based on the theory of "fuzzy" atoms.

    PubMed

    Alcoba, Diego R; Lain, Luis; Torre, Alicia; Bochicchio, Roberto C

    2006-04-15

    This work proposes a new procedure for localizing molecular and natural orbitals. The localization criterion presented here is based on the partitioning of the overlap matrix into atomic contributions within the theory of "fuzzy" atoms. Our approach has several advantages over other schemes: it is computationally inexpensive, preserves the sigma/pi-separability in planar systems and provides a straightforward interpretation of the resulting orbitals in terms of their localization indices and atomic occupancies. The corresponding algorithm has been implemented and its efficiency tested on selected molecular systems. (c) 2006 Wiley Periodicals, Inc.

  19. Properties of DRGs, LBGs, and BzK Galaxies in the GOODS South Field

    NASA Astrophysics Data System (ADS)

    Grazian, A.; Salimbeni, S.; Pentericci, L.; Fontana, A.; Santini, P.; Giallongo, E.; de Santis, C.; Gallozzi, S.; Nonino, M.; Cristiani, S.; Vanzella, E.

    2007-12-01

    We use the GOODS-MUSIC catalog with multi-wavelength coverage extending from the U band to the Spitzer 8 μm band, and spectroscopic or accurate photometric redshifts to select samples of BM/BX/LBGs, DRGs, and BzK galaxies. We discuss the overlap and the limitations of these selection criteria, which can be overcome with a criterion based on physical parameters (age and star formation timescale). We show that the BzK-PE criterion is not optimal for selecting early type galaxies at the faint end. We also find that LBGs and DRGs contribute almost equally to the global Stellar Mass Density (SMD) at z≥ 2 and in general that star forming galaxies form a substantial fraction of the universal SMD.

  20. Kernel learning at the first level of inference.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
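
    The first level of inference the paper builds on is the LS-SVM solve itself, a single linear system. A numpy sketch with an RBF kernel; the paper's actual contribution, jointly optimising the kernel parameters with a regularised training criterion, is not reproduced here.

    ```python
    import numpy as np

    def lssvm_train(X, y, gamma=1.0, width=1.0):
        """Solve the LS-SVM system for an RBF-kernel classifier:
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * width ** 2))
        n = len(y)
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]          # bias, dual coefficients

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2)) + np.repeat([[2, 2], [-2, -2]], 30, axis=0)
    y = np.repeat([1.0, -1.0], 30)
    b, alpha = lssvm_train(X, y)
    print(b, alpha[:3])
    ```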

  1. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved by ranking models in increasing order of preference. One method is to apply the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet that raises two questions: why use this criterion, and are there other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
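
    A hedged sketch of ranking candidate models by relative entropy against an observed histogram; the candidate shapes and binning are illustrative assumptions.

    ```python
    import numpy as np

    def kl(p, q, eps=1e-12):
        """Kullback-Leibler divergence KL(p || q) between two histograms."""
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    rng = np.random.default_rng(0)
    counts, edges = np.histogram(rng.normal(size=5000), bins=40)
    p = counts.astype(float)

    # Rank candidate models by KL distance of their predicted histograms to the data.
    centers = 0.5 * (edges[:-1] + edges[1:])
    models = {"normal":  np.exp(-centers**2 / 2),
              "wide":    np.exp(-centers**2 / 8),
              "shifted": np.exp(-(centers - 1)**2 / 2)}
    ranking = sorted(models, key=lambda m: kl(p, models[m]))
    print(ranking)   # most preferred model first
    ```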

  2. When is hub gene selection better than standard meta-analysis?

    PubMed

    Langfelder, Peter; Mischel, Paul S; Horvath, Steve

    2013-01-01

    Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as well as (if not better than) a consensus network approach in terms of validation success (criterion 2). The article also reports a comparison of meta-analysis techniques applied to gene expression data and presents novel R functions for carrying out consensus network analysis, network-based screening, and meta-analysis.

  3. Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process

    NASA Astrophysics Data System (ADS)

    Yan, Wei; Chang, Yuwen

    2016-12-01

    Considering the stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion, with the underlying prices following jump-diffusion processes (driven by Wiener and Poisson processes). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under a safety-first criterion.

  4. Data consistency criterion for selecting parameters for k-space-based reconstruction in parallel imaging.

    PubMed

    Nana, Roger; Hu, Xiaoping

    2010-01-01

    k-space-based reconstruction in parallel imaging depends on the reconstruction kernel setting, including its support. An optimal choice of the kernel depends on the calibration data, coil geometry and signal-to-noise ratio, as well as the criterion used. In this work, data consistency, imposed by the shift invariance requirement of the kernel, is introduced as a goodness measure of k-space-based reconstruction in parallel imaging and demonstrated. Data consistency error (DCE) is calculated as the sum of squared difference between the acquired signals and their estimates obtained based on the interpolation of the estimated missing data. A resemblance between DCE and the mean square error in the reconstructed image was found, demonstrating DCE's potential as a metric for comparing or choosing reconstructions. When used for selecting the kernel support for generalized autocalibrating partially parallel acquisition (GRAPPA) reconstruction and the set of frames for calibration as well as the kernel support in temporal GRAPPA reconstruction, DCE led to improved images over existing methods. Data consistency error is efficient to evaluate, robust for selecting reconstruction parameters and suitable for characterizing and optimizing k-space-based reconstruction in parallel imaging.
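
    A schematic numpy rendering of the DCE computation (the reconstruction and kernel-application steps are left as user-supplied functions, since they depend on the GRAPPA implementation at hand; `reapply_kernel`, `reconstruct` and `predict_with` are hypothetical names):

    ```python
    import numpy as np

    def data_consistency_error(kspace, acquired_mask, reapply_kernel):
        """DCE: sum of squared differences between the acquired k-space
        samples and their re-estimates obtained by applying the fitted
        reconstruction kernel to the completed (interpolated) k-space."""
        estimates = reapply_kernel(kspace)           # re-predict every sample
        diff = kspace[acquired_mask] - estimates[acquired_mask]
        return float(np.sum(np.abs(diff) ** 2))

    # Parameter selection: evaluate DCE for each candidate kernel support
    # and keep the one with the smallest error, e.g.
    #   best = min(candidates, key=lambda c: data_consistency_error(
    #                  reconstruct(c), mask, lambda k: predict_with(c, k)))
    ```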

  5. An Integrated model for Product Quality Development—A case study on Quality functions deployment and AHP based approach

    NASA Astrophysics Data System (ADS)

    Maitra, Subrata; Banerjee, Debamalya

    2010-10-01

    The present article concerns the application of product quality and design improvement related to the nature of machinery failures and plant operational problems of an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to the blower fans that have been ordered directly by the customer on the basis of the installed capacity of air to be provided by the fan. Application of quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with AHP to select and rank the decision criteria on commercial and technical factors and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions, by pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in implementing QFD-AHP and selecting weighted criteria may be helpful for all similar-purpose industries balancing cost and utility for a competitive product.
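
    The pairwise-comparison step at the heart of AHP can be sketched as follows (a generic textbook implementation, not the authors' specific code; the example matrix is invented): criterion weights are the normalised principal eigenvector of the comparison matrix, and the consistency ratio checks the coherence of the experts' judgments.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Priority weights from an AHP pairwise-comparison matrix
        (principal right eigenvector, normalised to sum to 1)."""
        A = np.asarray(pairwise, float)
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()
        n = A.shape[0]
        ci = (vals[k].real - n) / (n - 1)          # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)    # Saaty's random index
        return w, (ci / ri if ri else None)

    # Example: 3 criteria compared on Saaty's 1-9 scale
    w, cr = ahp_weights([[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]])
    print(w, cr)   # weights and consistency ratio (CR < 0.1 is acceptable)
    ```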

  6. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
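
    For reference, the BIC used in such comparisons has the standard form -2 ln L + k ln n. A small sketch with invented log-likelihood values (a k-state Markov modulated Poisson process has roughly k² free parameters: k intensities plus k(k-1) switching rates):

    ```python
    import numpy as np

    def bic(log_likelihood, n_params, n_obs):
        # Bayesian information criterion; smaller is better
        return -2.0 * log_likelihood + n_params * np.log(n_obs)

    # Compare two-, three- and four-state kinetic models fitted to the same
    # photon trajectory (log-likelihoods below are placeholders)
    fits = {2: (-5210.4, 4), 3: (-5150.2, 9), 4: (-5148.9, 16)}
    n_photons = 2000
    best = min(fits, key=lambda k: bic(fits[k][0], fits[k][1], n_photons))
    print(best)  # the BIC penalty favours the parsimonious true model
    ```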

  7. Discriminative Projection Selection Based Face Image Hashing

    NASA Astrophysics Data System (ADS)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
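
    The row-selection idea can be sketched as follows (an illustrative reading of the method on synthetic data; the paper's quantization step and other user-dependent details are omitted): each row of the random projection matrix is scored by the Fisher criterion on the projected features, and the most discriminative rows are kept.

    ```python
    import numpy as np

    def fisher_ratio(z_user, z_others):
        # Between-class scatter over within-class scatter for 1-D projections
        return ((z_user.mean() - z_others.mean()) ** 2
                / (z_user.var() + z_others.var() + 1e-12))

    def select_rows(P, feats_user, feats_others, k):
        # Keep the k rows of projection matrix P that best separate the user
        scores = [fisher_ratio(feats_user @ p, feats_others @ p) for p in P]
        return P[np.argsort(scores)[-k:]]

    rng = np.random.default_rng(1)
    P = rng.normal(size=(100, 64))                 # candidate projections
    user = rng.normal(0.5, 1.0, size=(20, 64))     # enrolled user's features
    others = rng.normal(0.0, 1.0, size=(200, 64))  # other users' features
    P_selected = select_rows(P, user, others, k=32)
    ```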

  8. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    PubMed Central

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
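
    One simple rendering of the idea (a sketch only; the published procedure differs in details such as the choice of quantile): the penalty is set just large enough that, on permuted (null) responses, the LASSO would select no variables.

    ```python
    import numpy as np

    def permutation_lambda(X, y, n_perm=100, quantile=0.95, seed=0):
        # For each permuted response, the smallest penalty selecting nothing
        # is max|X_c'(y - ybar)| / n (scikit-learn's Lasso parametrisation);
        # a high quantile over permutations is taken as the chosen penalty.
        rng = np.random.default_rng(seed)
        Xc = X - X.mean(axis=0)
        n = len(y)
        lams = [np.max(np.abs(Xc.T @ (yp - yp.mean()))) / n
                for yp in (rng.permutation(y) for _ in range(n_perm))]
        return float(np.quantile(lams, quantile))

    # e.g. Lasso(alpha=permutation_lambda(X, y)).fit(X, y) with scikit-learn
    ```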

  9. Selecting Items for Criterion-Referenced Tests.

    ERIC Educational Resources Information Center

    Mellenbergh, Gideon J.; van der Linden, Wim J.

    1982-01-01

    Three item selection methods for criterion-referenced tests are examined: the classical theory of item difficulty and item-test correlation; the latent trait theory of item characteristic curves; and a decision-theoretic approach for optimal item selection. Item contribution to the standardized expected utility of mastery testing is discussed. (CM)

  10. The optimal code searching method with an improved criterion of coded exposure for remote sensing image restoration

    NASA Astrophysics Data System (ADS)

    He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2015-03-01

    Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The search method for the optimal code is significant for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between the code length and the number of ones in the code, considering the noise effect on code selection with the affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method, and the restored image shows better subjective quality and superior objective evaluation values.

  11. A reduced-order model for compressible flows with buffeting condition using higher order dynamic mode decomposition with a mode selection criterion

    NASA Astrophysics Data System (ADS)

    Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei

    2018-01-01

    This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested on the solution of a NACA0012 airfoil buffeting in transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a promising tool for use with both numerical simulations and experimental data.

  12. Decision support system of e-book provider selection for library using Simple Additive Weighting

    NASA Astrophysics Data System (ADS)

    Ciptayani, P. I.; Dewi, K. C.

    2018-01-01

    Each library has its own criteria, and its own view of the importance of each criterion, when choosing an e-book provider. The large number of providers and the different importance levels of each criterion make determining the e-book provider a complex and time-consuming decision. The aim of this study was to implement a decision support system (DSS) to assist the library in selecting the best e-book provider based on its preferences. The DSS works by comparing the importance of each criterion and the condition of each alternative decision. SAW (Simple Additive Weighting) is a DSS method that is quite simple, fast and widely used. This study used 9 criteria and 18 providers to demonstrate how SAW works. With the DSS, decision-making time can be shortened and the calculation results can be more accurate than manual calculations.
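
    SAW itself is only a few lines; a generic sketch (the scores, weights, and criteria below are invented for illustration): each criterion column is normalised, benefit criteria by dividing by the column maximum and cost criteria by dividing the column minimum by the score, and the weighted sums then rank the alternatives.

    ```python
    import numpy as np

    def saw_rank(scores, weights, benefit):
        # Normalise each criterion column, weight it, and sum across criteria
        X = np.asarray(scores, float)
        norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
        totals = norm @ np.asarray(weights, float)
        return np.argsort(-totals), totals      # best alternative first

    # 3 providers scored on 3 criteria; the last criterion (price) is a cost
    order, totals = saw_rank([[7, 9, 300], [8, 6, 250], [9, 8, 400]],
                             weights=[0.5, 0.3, 0.2],
                             benefit=[True, True, False])
    print(order)
    ```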

  13. Criterion-Referenced Testing in Foreign Language Teaching.

    ERIC Educational Resources Information Center

    Takala, Sauli

    A review of literature serves as the basis for a discussion of various aspects of criterion-referenced tests. The aspects discussed are: teaching and evaluation objectives, criterion- and norm-referenced measurement, stages in construction of criterion-referenced tests, construction and selection of items, test validity, and test reliability.…

  14. Selection of infectious medical waste disposal firms by using the analytic hierarchy process and sensitivity analysis.

    PubMed

    Hsu, Pi-Fang; Wu, Cheng-Ru; Li, Ya-Ting

    2008-01-01

    While Taiwanese hospitals dispose of large amounts of medical waste to ensure sanitation and personal hygiene, doing so inefficiently creates potential environmental hazards and increases operational expenses. However, hospitals lack objective criteria to select the most appropriate waste disposal firm and evaluate its performance, instead relying on their own subjective judgment and previous experiences. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select medical waste disposal firms based on the results of interviews with experts in the field, thus reducing overhead costs and enhancing medical waste management. An appropriate weight criterion based on AHP is derived to assess the effectiveness of medical waste disposal firms. The proposed AHP-based method offers a more efficient and precise means of selecting medical waste firms than subjective assessment methods do, thus reducing the potential risks for hospitals. Analysis results indicate that the medical sector selects the most appropriate infectious medical waste disposal firm based on the following rank: matching degree, contractor's qualifications, contractor's service capability, contractor's equipment and economic factors. By providing hospitals with an effective means of evaluating medical waste disposal firms, the proposed AHP method can reduce overhead costs and enable medical waste management to understand the market demand in the health sector. Moreover, performed through use of Expert Choice software, sensitivity analysis can survey the criterion weight of the degree of influence with an alternative hierarchy.

  15. 34 CFR 389.30 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 34 Education 2 2011-07-01 2010-07-01 true What additional selection criterion is used under this program? 389.30 Section 389.30 Education Regulations of the Offices of the Department of Education... CONTINUING EDUCATION PROGRAMS How Does the Secretary Make a Grant? § 389.30 What additional selection...

  16. 34 CFR 389.30 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 34 Education 2 2010-07-01 2010-07-01 false What additional selection criterion is used under this program? 389.30 Section 389.30 Education Regulations of the Offices of the Department of Education... CONTINUING EDUCATION PROGRAMS How Does the Secretary Make a Grant? § 389.30 What additional selection...

  17. Data mining in soft computing framework: a survey.

    PubMed

    Mitra, S; Pal, S K; Mitra, P

    2002-01-01

    The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally fuzzy sets are suitable for handling the issues related to understandability of patterns, incomplete/noisy data, mixed media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric, robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and the application of soft computing methodologies are indicated. An extensive bibliography is also included.

  18. Assessing the Validity of Automated Webcrawlers as Data Collection Tools to Investigate Online Child Sexual Exploitation.

    PubMed

    Westlake, Bryce; Bouchard, Martin; Frank, Richard

    2017-10-01

    The distribution of child sexual exploitation (CE) material has been aided by the growth of the Internet. The graphic nature and prevalence of the material have made researching and combating it difficult. Although used to study online CE distribution, automated data collection tools (e.g., webcrawlers) have yet to be shown effective at targeting only relevant data. Using CE-related image and keyword criteria, we compare networks starting from CE websites to those from similar non-CE sexuality websites and dissimilar sports websites. Our results provide evidence that (a) webcrawlers have the potential to provide valid CE data, if the appropriate criterion is selected; (b) CE distribution is still heavily image-based, suggesting images as an effective criterion; (c) CE-seeded networks are more hub-based and differ from non-CE-seeded networks on several website characteristics. Recommendations for improvements to reliable criteria selection are discussed.

  19. RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can improve the downlink bit error rate (BER) performance of DS-CDMA beyond that achievable with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) scheme based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using a one-tap recursive least squares (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS) algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with an adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation.
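
    A per-subcarrier sketch of one-tap RLS channel estimation (the forgetting-factor update below is a simple error-driven heuristic standing in for the paper's LMS-based adaptation, and the block shapes are assumptions):

    ```python
    import numpy as np

    def rls_one_tap(pilot_blocks, received_blocks, lam0=0.97, mu=0.05):
        """One-tap RLS channel estimate per subcarrier; pilot_blocks and
        received_blocks are sequences of complex arrays (one per block)."""
        H = np.zeros_like(received_blocks[0])       # channel estimate
        P = np.ones(received_blocks[0].shape)       # inverse-correlation term
        lam, avg = lam0, None
        for p, z in zip(pilot_blocks, received_blocks):
            e = z - H * p                                   # a-priori error
            g = P * np.conj(p) / (lam + P * np.abs(p) ** 2) # RLS gain
            H = H + g * e                                   # channel update
            P = (1.0 - (g * p).real) * P / lam
            err = float(np.mean(np.abs(e) ** 2))
            # Heuristic: shrink lambda (track faster) when the error grows
            avg = err if avg is None else 0.9 * avg + 0.1 * err
            lam = float(np.clip(lam - mu * (err - avg), 0.90, 0.999))
        return H
    ```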

  20. GPS baseline configuration design based on robustness analysis

    NASA Astrophysics Data System (ADS)

    Yetkin, M.; Berber, M.

    2012-11-01

    The robustness analysis results obtained from a Global Positioning System (GPS) network are dramatically influenced by the configuration of the observed baselines. The selection of optimal GPS baselines may allow for a cost effective survey campaign and a sufficiently robust network. Furthermore, using the approach described in this paper, the required number of sessions, the baselines to be observed, and the significance levels for statistical testing and robustness analysis can be determined even before the GPS campaign starts. In this study, we propose a robustness criterion for the optimal design of geodetic networks, and present a very simple and efficient algorithm based on this criterion for the selection of optimal GPS baselines. We also show the relationship between the number of sessions and the non-centrality parameter. Finally, a numerical example is given to verify the efficacy of the proposed approach.

  1. Identification of confounders in the assessment of the relationship between lead exposure and child development.

    PubMed

    Tong, I S; Lu, Y

    2001-01-01

    To explore the best approach to identifying and adjusting for confounders in epidemiologic practice. In the Port Pirie cohort study, the selection of covariates was based on both a priori and empirical considerations. In an assessment of the relationship between exposure to environmental lead and child development, change-in-estimate (CE) and significance testing (ST) criteria were compared in identifying potential confounders. Pearson correlation coefficients were used to evaluate the potential for collinearity between pairs of major quantitative covariates. In multivariate analyses, the effects of confounding factors were assessed with multiple linear regression models. The nature and number of covariates selected varied with different confounder selection criteria and different cutoffs. Four covariates (i.e., quality of home environment, socioeconomic status (SES), maternal intelligence, and parental smoking behaviour) met the conventional CE criterion (≥10%), whereas 14 variables met the ST criterion (p ≤ 0.25). However, the magnitude of the relationship between blood lead concentration and children's IQ differed only slightly after adjustment for confounding, using either the CE (partial regression coefficient: -4.4; 95% confidence interval (CI): -0.5 to -8.3) or the ST criterion (-4.3; 95% CI: -0.2 to -8.4). Identification and selection of confounding factors need to be viewed cautiously in epidemiologic studies. Either the CE (e.g., ≥10%) or ST (e.g., p ≤ 0.25) criterion may be used to identify a potential confounder if the study sample is sufficiently large, and both methods are subject to arbitrariness in the choice of cut-off point. In this study, the CE criterion (≥10%) appears to be more stringent than the ST method (p ≤ 0.25) in the identification of confounders. However, the ST rule cannot be used to establish the presence of confounding because it does not reflect the causal relationship between the confounder and the outcome. This study shows the complexities one can expect to encounter in the identification of and adjustment for confounders.
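
    The CE criterion reduces to comparing the exposure coefficient with and without the candidate covariate; a minimal sketch (ordinary least squares, with the exposure assumed to be the first column of the design matrix):

    ```python
    import numpy as np

    def exposure_coef(X, y):
        # OLS fit with intercept; exposure is the first column of X
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return beta[1]

    def is_confounder_ce(X_base, covariate, y, threshold=0.10):
        # Change-in-estimate: flag the covariate if adding it shifts the
        # exposure coefficient by at least `threshold` (10% by convention)
        b0 = exposure_coef(X_base, y)
        b1 = exposure_coef(np.column_stack([X_base, covariate]), y)
        return abs((b1 - b0) / b0) >= threshold
    ```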

  2. Choice: 36 band feature selection software with applications to multispectral pattern recognition

    NASA Technical Reports Server (NTRS)

    Jones, W. C.

    1973-01-01

    Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.
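
    The exhaustive search is straightforward to express. In the sketch below, `avg_divergence` is a placeholder for a routine computing the average pairwise-class divergence over a channel subset; note that C(22, 4) = 7,315 subsets, which explains the modest run time reported.

    ```python
    import itertools

    def best_channel_subset(avg_divergence, n_channels=22, k=4):
        # Exhaustively score every k-channel subset and keep the best one
        best_subset, best_score = None, float("-inf")
        for subset in itertools.combinations(range(n_channels), k):
            score = avg_divergence(subset)
            if score > best_score:
                best_subset, best_score = subset, score
        return best_subset, best_score
    ```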

  3. Selection of effective cocrystals former for dissolution rate improvement of active pharmaceutical ingredients based on lipoaffinity index.

    PubMed

    Cysewski, Piotr; Przybyłek, Maciej

    2017-09-30

    A new theoretical screening procedure is proposed for the selection of potential cocrystal formers with the ability to enhance the dissolution rates of drugs. The procedure relies on a training set comprising 102 positive and 17 negative cases of cocrystals found in the literature. Although the only available data were of qualitative character, statistical analysis using binary classification allowed quantitative criteria to be formulated. Among the 3679 molecular descriptors considered, the relative value of the lipoaffinity index, expressed as the difference between the values calculated for the active compound and the excipient, was found to be the most appropriate measure for discriminating positive and negative cases. Assuming 5% precision, the applied classification criterion led to the inclusion of 70% of positive cases in the final prediction. Since the lipoaffinity index is a molecular descriptor computed using only 2D information about a chemical structure, its estimation is straightforward and computationally inexpensive. The inclusion of an additional criterion quantifying the cocrystallization probability leads to the conjoint criteria H_mix < -0.18 and ΔLA > 3.61, allowing for the identification of dissolution rate enhancers. The screening procedure was applied to find the most promising coformers of drugs such as Iloperidone, Ritonavir, Carbamazepine and Enthenzamide. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Effects of task-irrelevant grouping on visual selection in partial report.

    PubMed

    Lunau, Rasmus; Habekost, Thomas

    2017-07-01

    Perceptual grouping modulates performance in attention tasks such as partial report and change detection. Specifically, grouping of search items according to a task-relevant feature improves the efficiency of visual selection. However, the role of task-irrelevant feature grouping is not clearly understood. In the present study, we investigated whether grouping of targets by a task-irrelevant feature influences performance in a partial-report task. In this task, participants must report as many target letters as possible from a briefly presented circular display. The crucial manipulation concerned the color of the elements in these trials. In the sorted-color condition, the color of the display elements was arranged according to the selection criterion, and in the unsorted-color condition, colors were randomly assigned. The distractor cost was inferred by subtracting performance in partial-report trials from performance in a control condition that had no distractors in the display. Across five experiments, we manipulated trial order, selection criterion, and exposure duration, and found that attentional selectivity was improved in sorted-color trials when the exposure duration was 200 ms and the selection criterion was luminance. This effect was accompanied by impaired selectivity in unsorted-color trials. Overall, the results suggest that the benefit of task-irrelevant color grouping of targets is contingent on the processing locus of the selection criterion.

  5. Continuous-time safety-first portfolio selection with jump-diffusion processes

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    2012-04-01

    This article is concerned with continuous-time portfolio selection based on a safety-first criterion under discontinuous price processes (jump-diffusion processes). The solution of the corresponding Hamilton-Jacobi-Bellman equation of the problem is demonstrated. The analytical solutions are presented for the case in which no riskless asset exists. Moreover, the problem is also discussed when one riskless asset exists.

  6. Portrayal of Life Form in Selected Biographies for Children Eight to Twelve Years of Age.

    ERIC Educational Resources Information Center

    Koch, Shirley Lois

    This study describes and analyzes, in a critical literary manner, selected biographies for children eight to twelve years of age. Biographies of Jane Addams, Cesar Chavez, Mohandas Gandhi, Toyohiko Kagawa, Martin Luther King, Jr., and Albert Schweitzer are viewed from the perspective of a literary criterion based on the principles of design to…

  7. South Carolina Elementary Teachers' Perceptions of Principals' Transformational Leadership in Academically Recognized and Other, High Poverty Schools

    ERIC Educational Resources Information Center

    Lewis, Julian Carlton

    2012-01-01

    This study investigated selected elementary school teachers' perceptions of principals' leadership. Ten South Carolina schools were selected based on the criterion of 50% or higher poverty index. Five schools included the feature of recognition by the state for academic success for one year or more over the 2003-2006 timeframe. One hundred three…

  8. Model selection for the North American Breeding Bird Survey: A comparison of methods

    USGS Publications Warehouse

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
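
    For concreteness, WAIC is computed from an S x N matrix of pointwise log-likelihoods over S posterior draws and N observations (a standard formulation of the criterion; the paper's contribution is its evaluation against the BPIC, not this formula):

    ```python
    import numpy as np
    from scipy.special import logsumexp

    def waic(loglik):
        # loglik: S x N matrix of pointwise log-likelihoods (posterior draws)
        S = loglik.shape[0]
        lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))   # log pointwise
        p_waic = np.sum(np.var(loglik, axis=0, ddof=1))        # effective params
        return -2.0 * (lppd - p_waic)                          # deviance scale
    ```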

  9. Methodology of functionality selection for water management software and examples of its application.

    PubMed

    Vasilyev, K N

    2013-01-01

    When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions. The application of this approach minimises the chances of making errors in selecting the functions to apply. Based on work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on the evaluation of the relative significance of the functions to be included in the software product. Evaluation is achieved by considering each criterion and its weighting coefficient in turn and applying the method of normalisation. This paper includes a description of this new approach and examples of its application in the development of new software products in the area of water resources management.

  10. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  11. Time-frequency analysis-based time-windowing algorithm for the inverse synthetic aperture radar imaging of ships

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Zhang, Xi; Sun, Weifeng; Dai, Yongshou; Wan, Yong

    2018-01-01

    An algorithm based on time-frequency analysis is proposed to select an imaging time window for the inverse synthetic aperture radar imaging of ships. An appropriate range bin is selected to perform the time-frequency analysis after radial motion compensation. The selected range bin is that with the maximum mean amplitude among the range bins whose echoes are confirmed to be contributed by a dominant scatterer. The criterion for judging whether the echoes of a range bin are contributed by a dominant scatterer is key to the proposed algorithm and is therefore described in detail. When the first range bin that satisfies the judgment criterion is found, a sequence composed of the frequencies that have the largest amplitudes in every moment's time-frequency spectrum corresponding to this range bin is employed to calculate the length and the center moment of the optimal imaging time window. Experiments performed with simulation data and real data show the effectiveness of the proposed algorithm, and comparisons between the proposed algorithm and the image contrast-based algorithm (ICBA) are provided. Similar image contrast and lower entropy are acquired using the proposed algorithm compared with those obtained using the ICBA.

  12. 45 CFR 1151.33 - Employment criteria.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS NONDISCRIMINATION ON THE BASIS OF HANDICAP Discrimination... or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by the...

  13. 45 CFR 1151.33 - Employment criteria.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS NONDISCRIMINATION ON THE BASIS OF HANDICAP Discrimination... or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by the...

  14. 45 CFR 1151.33 - Employment criteria.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... HUMANITIES NATIONAL ENDOWMENT FOR THE ARTS NONDISCRIMINATION ON THE BASIS OF HANDICAP Discrimination... or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by the...

  15. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  16. The Anti-Resonance Criterion in Selecting Pick Systems for Fully Operational Cutting Machinery Used in Mining

    NASA Astrophysics Data System (ADS)

    Cheluszka, Piotr

    2017-12-01

    This article discusses the selection of a pick system for cutting mining machinery with regard to reducing vibrations in the cutting system, particularly in its load-carrying structure, during operation. Numerical analysis was performed on a telescopic roadheader boom equipped with transverse heads. The frequency range of the boom's free vibrations, for a given structure and dynamic properties, was determined based on a dynamic model. The main components of the boom's vibration excitation, generated by the process of cutting rock, were identified; these are closely associated with the stereometry of the cutting heads. The impact of the pick system (the number of picks and their arrangement along the side of the cutting head) on the intensity of the external boom load components, especially in resonance zones, was determined. With regard to the anti-resonance criterion, an advantageous system of cutting head picks was determined as a result of the analysis undertaken. The correct selection of the pick system was ascertained based on a computer simulation of the dynamic loads and vibrations of a roadheader telescopic boom.

  17. Selection of infectious medical waste disposal firms by using the analytic hierarchy process and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, P.-F.; Wu, C.-R.; Li, Y.-T.

    2008-07-01

    While Taiwanese hospitals dispose of large amounts of medical waste to ensure sanitation and personal hygiene, doing so inefficiently creates potential environmental hazards and increases operational expenses. However, hospitals lack objective criteria to select the most appropriate waste disposal firm and evaluate its performance, instead relying on their own subjective judgment and previous experiences. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select medical waste disposal firms based on the results of interviews with experts in the field, thus reducing overhead costs and enhancing medical waste management. An appropriate weight criterion based on AHP is derived to assess the effectiveness of medical waste disposal firms. The proposed AHP-based method offers a more efficient and precise means of selecting medical waste firms than subjective assessment methods do, thus reducing the potential risks for hospitals. Analysis results indicate that the medical sector selects the most appropriate infectious medical waste disposal firm based on the following rank: matching degree, contractor's qualifications, contractor's service capability, contractor's equipment and economic factors. By providing hospitals with an effective means of evaluating medical waste disposal firms, the proposed AHP method can reduce overhead costs and enable medical waste management to understand the market demand in the health sector. Moreover, performed through use of Expert Choice software, sensitivity analysis can survey the criterion weight of the degree of influence with an alternative hierarchy.

  18. Criterion-Referenced Measurement; A Bibliography.

    ERIC Educational Resources Information Center

    Keller, Claudia Merkel

    This bibliography lists selected articles, research reports, monographs, books, and reference works related to criterion-referenced measurement. It is limited primarily to material which deals directly with criterion-referenced tests and testing procedures, and includes reports on computer-assisted test construction and the adaptation of…

  19. Wavelength selection in injection-driven Hele-Shaw flows: A maximum amplitude criterion

    NASA Astrophysics Data System (ADS)

    Dias, Eduardo; Miranda, Jose

    2013-11-01

    As in most interfacial flow problems, the standard theoretical procedure to establish wavelength selection in the viscous fingering instability is to maximize the linear growth rate. However, there are important discrepancies between previous theoretical predictions and existing experimental data. In this work we perform a linear stability analysis of the radial Hele-Shaw flow system that takes into account the combined action of viscous normal stresses and wetting effects. Most importantly, we introduce an alternative selection criterion for which the selected wavelength is determined by the maximum of the interfacial perturbation amplitude. The effectiveness of such a criterion is substantiated by the significantly improved agreement between theory and experiments. We thank CNPq (Brazilian Sponsor) for financial support.

  20. Interspecies quantitative structure-activity relationships (QSARs) for eco-toxicity screening of chemicals: the role of physicochemical properties.

    PubMed

    Furuhama, A; Hasunuma, K; Aoki, Y

    2015-01-01

    In addition to molecular structure profiles, descriptors based on physicochemical properties are useful for explaining the eco-toxicities of chemicals. In a previous study we reported that a criterion based on the difference between the partition coefficient (log POW) and distribution coefficient (log D) values of chemicals enabled us to identify aromatic amines and phenols for which interspecies relationships with strong correlations could be developed for fish-daphnid and algal-daphnid toxicities. The chemicals that met the log D-based criterion were expected to have similar toxicity mechanisms (related to membrane penetration). Here, we investigated the applicability of log D-based criteria to the eco-toxicity of other kinds of chemicals, including aliphatic compounds. At pH 10, use of a log POW - log D > 0 criterion and omission of outliers resulted in the selection of more than 100 chemicals whose acute fish toxicities or algal growth inhibition toxicities were almost equal to their acute daphnid toxicities. The advantage of log D-based criteria is that they allow for simple, rapid screening and prioritizing of chemicals. However, inorganic molecules and chemicals containing certain structural elements cannot be evaluated, because calculated log D values are unavailable.

  1. Trait-specific long-term consequences of genomic selection in beef cattle.

    PubMed

    de Rezende Neves, Haroldo Henrique; Carvalheiro, Roberto; de Queiroz, Sandra Aidar

    2018-02-01

    Simulation studies make it possible to address the consequences of selection schemes, helping to identify effective strategies that enable genetic gain and maintain genetic diversity. The aim of this study was to evaluate the long-term impact of genomic selection (GS) on the genetic progress and genetic diversity of beef cattle. Forward-in-time simulation generated a population with a pattern of linkage disequilibrium close to that previously reported for real beef cattle populations. Different scenarios of GS and traditional pedigree-based BLUP (PBLUP) selection were simulated for 15 generations, mimicking selection for female reproduction and meat quality. For the GS scenarios, an alternative selection criterion (wGBLUP) was simulated, intended to enhance long-term gains by attributing more weight to favorable alleles with low frequency. GS allowed genetic progress up to 40% greater than PBLUP for female reproduction and meat quality. The alternative criterion wGBLUP did not increase long-term response, although it allowed reducing inbreeding rates and the loss of favorable alleles. The results suggest that GS outperforms PBLUP when the selected trait is under a less polygenic background and that attributing more weight to low-frequency favorable alleles can reduce inbreeding rates and the loss of favorable alleles in GS.

  2. 14 CFR 1251.202 - Employment criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... HANDICAP Employment Practices § 1251.202 Employment criteria. (a) A recipient may not make use of any employment test or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by...

  3. 14 CFR 1251.202 - Employment criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... HANDICAP Employment Practices § 1251.202 Employment criteria. (a) A recipient may not make use of any employment test or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by...

  4. 14 CFR 1251.202 - Employment criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... HANDICAP Employment Practices § 1251.202 Employment criteria. (a) A recipient may not make use of any employment test or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by...

  5. 14 CFR 1251.202 - Employment criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... HANDICAP Employment Practices § 1251.202 Employment criteria. (a) A recipient may not make use of any employment test or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as used by...

  6. 14 CFR § 1251.202 - Employment criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... OF HANDICAP Employment Practices § 1251.202 Employment criteria. (a) A recipient may not make use of any employment test or other selection criterion that screens out or tends to screen out handicapped persons or any class of handicapped persons unless: (1) The test score or other selection criterion, as...

  7. Human striatal activation during adjustment of the response criterion in visual word recognition.

    PubMed

    Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred

    2011-02-01

    Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.

  8. A new tracer‐density criterion for heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar

    2001-01-01

    Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density‐induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic‐gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density‐induced instabilities in homogeneous media. The modified criterion was tested using a series of two‐dimensional heterogeneous intermediate‐scale tracer experiments and data from several detailed field tracer tests. The intermediate‐scale experimental facility (10.0×1.2×0.06 m) included both homogeneous and heterogeneous (σ²_ln k = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ²_ln k < 0.37), but measurements were sufficient to detect density‐induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density‐induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density‐induced sinking and differences in the onset of density‐induced sinking in two‐ and three‐dimensional systems.

  9. Interface Pattern Selection Criterion for Cellular Structures in Directional Solidification

    NASA Technical Reports Server (NTRS)

    Trivedi, R.; Tewari, S. N.; Kurtze, D.

    1999-01-01

    The aim of this investigation is to establish key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. We shall first address scientific concepts that are crucial in the selection of interface patterns. Next, the results of ground-based experimental studies in the Al-4.0 wt % Cu system will be described. Both experimental studies and theoretical calculations will be presented to establish the need for microgravity experiments.

  10. Bayesian cross-validation for model evaluation and selection, with application to the North American Breeding Bird Survey

    USGS Publications Warehouse

    Link, William; Sauer, John R.

    2016-01-01

    The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.

  11. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.

  12. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    PubMed

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within plus or minus 10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that the within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for the calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
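
    The validity check itself can be expressed compactly (a sketch assuming the criterion is a relative distance in means and variances; the paper's exact standardization may differ):

    ```python
    import numpy as np

    def meets_validity_criterion(real, simulated, tol=0.10):
        # Standardised distances in mean and variance must both be within tol
        d_mean = (np.mean(simulated) - np.mean(real)) / np.mean(real)
        d_var = ((np.var(simulated, ddof=1) - np.var(real, ddof=1))
                 / np.var(real, ddof=1))
        return abs(d_mean) <= tol and abs(d_var) <= tol
    ```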

  13. Problems in Criterion-Referenced Measurement. CSE Monograph Series in Evaluation, 3.

    ERIC Educational Resources Information Center

    Harris, Chester W., Ed.; And Others

    Six essays on technical measurement problems in criterion referenced tests and four essays by psychometricians proposing solutions are presented: (1) "Criterion-Referenced Measurement" and Other Such Terms, by Marvin C. Alkin which is an overview of the first six papers; (2) Selecting Objectives and Generating Test Items for Objectives-Based…

  14. Model weights and the foundations of multimodel inference

    USGS Publications Warehouse

    Link, W.A.; Barker, R.J.

    2006-01-01

    Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximate to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
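
    The model weights discussed here follow the standard information-criterion weighting, which applies equally to AIC and BIC values (a generic sketch; the numbers in the example are invented):

    ```python
    import numpy as np

    def ic_weights(ic_values):
        # w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j),
        # where delta_i is the difference from the smallest IC value
        ic = np.asarray(ic_values, float)
        w = np.exp(-0.5 * (ic - ic.min()))
        return w / w.sum()

    print(ic_weights([210.3, 212.1, 215.8]))  # approximate model weights
    ```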

  15. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
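
    The within-cluster resampling idea that RCIC builds on can be sketched as follows. This is the Hoffman-Sen-Weinberg procedure itself, not the resampling-free RCIC computation, and the spread returned here is a naive measure rather than the full WCR variance estimator:

```python
import numpy as np

def within_cluster_resampling(clusters, estimator, n_resamples=1000, seed=0):
    """Within-cluster resampling in outline: repeatedly draw one
    observation per cluster (which removes the dependence induced by
    informative cluster size), apply `estimator` to each i.i.d.
    resample, and aggregate over resamples.

    `clusters` is a list of per-cluster observation arrays; `estimator`
    maps a 1-D sample (one value per cluster) to a parameter estimate.
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resamples):
        sample = np.array([c[rng.integers(len(c))] for c in clusters])
        estimates.append(estimator(sample))
    estimates = np.asarray(estimates)
    return estimates.mean(axis=0), estimates.var(axis=0, ddof=1)
```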

  16. A new failure mechanism in thin film by collaborative fracture and delamination: Interacting duos of cracks

    NASA Astrophysics Data System (ADS)

    Marthelot, Joël; Bico, José; Melo, Francisco; Roman, Benoît

    2015-11-01

    When a thin film moderately adherent to a substrate is subjected to residual stress, the cooperation between fracture and delamination leads to unusual fracture patterns, such as spirals, alleys of crescents and various types of strips, all characterized by a robust characteristic length scale. We focus on the propagation of a duo of cracks: two fractures in the film connected by a delamination front and progressively detaching a strip. We show experimentally that the system selects an equilibrium width on the order of 25 times the thickness of the coating, independent of both fracture and adhesion energies. We investigate numerically the selection of the width and the condition for propagation by considering Griffith's criterion and the principle of local symmetry. In addition, we propose a simplified model based on the criterion of maximum energy release rate, which provides insight into the physical mechanisms leading to these regular patterns and predicts the effect of material properties on the selected width of the detaching strip.

  17. The Development of a Criterion Instrument for Counselor Selection.

    ERIC Educational Resources Information Center

    Remer, Rory; Sease, William

    A measure of potential performance as a counselor is needed as an adjunct to the information presently employed in selection decisions. This article deals with one possible method of developing such a potential-performance criterion and the steps taken to date in the attempt to validate it. It includes: the overall effectiveness of the…

  18. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In the case of weak models (i.e., low-precision data), however, the success rate of AR may be low, which may consequently introduce large errors into the baseline solution in cases of wrong fixing. Partial Ambiguity Resolution (PAR) has therefore been proposed, so that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are assumed to achieve the highest success rate among those of the same subset size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, 68% on average. For the medium- to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.
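
    A plausible building block for the pre-selection step, assuming ambiguities are scored by the well-known bootstrapping success rate computed from the conditional standard deviations of the decorrelated ambiguities; the FFRT second step and the precision-gain maximization are omitted, so this is only the SRC-style core that TSRC refines:

```python
import numpy as np
from scipy.stats import norm

def bootstrapped_success_rate(cond_std):
    """Bootstrapping success rate P = prod_i (2*Phi(1/(2*sigma_i)) - 1),
    with sigma_i the conditional standard deviations of the
    (decorrelated) ambiguities."""
    s = np.asarray(cond_std, dtype=float)
    return np.prod(2.0 * norm.cdf(1.0 / (2.0 * s)) - 1.0)

def largest_fixable_subset(cond_std, min_success=0.999):
    """Return indices of the largest subset whose success rate clears
    the threshold, building each candidate from the most precise
    ambiguities first (sorted conditional std)."""
    cond_std = np.asarray(cond_std, dtype=float)
    order = np.argsort(cond_std)                  # most precise first
    for size in range(len(order), 0, -1):
        idx = order[:size]
        if bootstrapped_success_rate(cond_std[idx]) >= min_success:
            return idx
    return np.array([], dtype=int)
```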

  19. Storage Optimization of Educational System Data

    ERIC Educational Resources Information Center

    Boja, Catalin

    2006-01-01

    Methods used to minimize the size of data files are described, and indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the posed problem objective: maximization or minimization of the optimum criterion that is…

  20. Model selection and model averaging in phylogenetics: advantages of Akaike information criterion and Bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
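
    A small sketch of the model-averaged inference advocated here, using one common form of the unconditional variance that adds between-model spread to within-model variances; the per-model estimates, variances, and Akaike weights below are hypothetical:

```python
import numpy as np

def model_averaged(estimates, variances, weights):
    """Model-averaged point estimate and a common form of its
    unconditional variance, which inflates within-model variances by
    the spread of per-model estimates around the averaged value."""
    est = np.asarray(estimates, dtype=float)
    var = np.asarray(variances, dtype=float)
    w = np.asarray(weights, dtype=float)
    theta = np.sum(w * est)
    unconditional_var = np.sum(w * (var + (est - theta) ** 2))
    return theta, unconditional_var

# Hypothetical branch-length estimates under three substitution models:
print(model_averaged([0.12, 0.15, 0.11], [1e-4, 2e-4, 1e-4], [0.5, 0.3, 0.2]))
```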

  1. Older Adults' Online Dating Profiles and Successful Aging.

    PubMed

    Wada, Mineko; Mortenson, William Bennett; Hurd Clarke, Laura

    2016-12-01

    This study examined how relevant Rowe and Kahn's three criteria of successful aging (low probability of disease and disability, high functioning, and active life engagement) were to older adults' self-portrayals in online dating profiles. In this cross-sectional study, 320 online dating profiles of older adults were randomly selected and coded based on the criteria. Logistic regression analyses determined whether age, gender, and race/ethnicity predicted self-presentation. Few profiles were indicative of successful aging due to the low prevalence of the first two criteria; the third criterion, however, was identified in many profiles. Native Americans were significantly less likely than other ethnic groups to highlight the first two criteria. Younger age predicted presenting the first criterion. Women's presentation of the third criterion remained significantly high with age. The findings suggest that the criteria may be unimportant to older adults when seeking partners, or they may reflect the exclusivity of this construct.

  2. Information Fusion for High Level Situation Assessment and Prediction

    DTIC Science & Technology

    2007-03-01

    procedure includes deciding a sensor set that achieves the optimal trade-off between its cost and benefit, activating the identified sensors, integrating...and effective decision can be made by dynamic inference based on selecting a subset of sensors with the optimal trade-off between their cost and...first step is achieved by designing a sensor selection criterion that represents the trade-off between the sensor benefit and sensor cost. This is then

  3. Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part III. Investigation of European Standard Methods

    PubMed Central

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A.; Kashon, Michael L.; Harper, Martin

    2015-01-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60–73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six levels for the medium- and seven for the high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min−1 for the medium- and 4.4, 10, and 11.2 l min−1 for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), an average value derived from repetitive measurements, corresponds to 11% PP_EN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. the AirChek XR5000 in this study); therefore, the more accurate criterion of an 11% average from repetitive measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model derived herein. The findings of this study will be delivered to the consensus committees to be considered when those standards, including EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. PMID:25053700
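
    Given the fitted regression, the claimed correspondence is a one-line check; the values below are taken directly from the abstract:

```python
# PP_EN = 0.014 + 0.375 * PP_NIOSH (adjusted R^2 = 0.871)
pp_niosh = 0.25                    # the 25% criterion of Lee et al. (2014a)
pp_en = 0.014 + 0.375 * pp_niosh   # -> 0.108, i.e. roughly 11% on the EN scale
print(f"PP_EN = {pp_en:.3f}")
```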

  4. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor quality of work and delay in project duration. Especially in the construction projects of public buildings, the proper choice of contractor is beneficial to the public institution. Public procurement processes have different characteristics owing to dissimilarities in the political, social and economic features of every country. In Turkey, the Turkish Public Procurement Law PPL 4734 is the main regulatory law for the procurement of public buildings. According to PPL 4734, public construction administrators have to contract with the lowest bidder who meets the minimum requirements according to the criteria in the prequalification process. Because of the restrictive provisions of PPL 4734, public administrators cannot go beyond the lowest bid when selecting a contractor. The lowest bid method does not enable public construction administrators to select the most qualified contractor, and they have realised that selection of a contractor based on the lowest bid alone is inadequate and may lead to the failure of the project in terms of time delay and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study aims to identify the importance of other criteria besides the lowest bid criterion in the contractor selection process of PPL 4734. In this study, a survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the results of the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) analysis, termination of construction work in previous tenders is the most important of the 12 determined criteria. The lowest bid criterion ranks fifth.
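
    For reference, the TOPSIS ranking used in the study proceeds as below. This sketch implements the standard method with vector normalization; in the study, the decision matrix and criterion weights would come from the survey responses:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix:  (alternatives x criteria) raw scores
    weights: criterion weights summing to 1
    benefit: True for benefit criteria, False for cost criteria
    Returns the closeness to the ideal solution (higher = better).
    """
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    norm = m / np.sqrt((m ** 2).sum(axis=0))        # vector normalization
    v = norm * w                                     # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_minus / (d_plus + d_minus)
```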

  5. A non-destructive selection criterion for fibre content in jute : II. Regression approach.

    PubMed

    Arunachalam, V; Iyer, R D

    1974-01-01

    An experiment with ten populations of jute, comprising varieties and mutants of the two species Corchorus olitorius and C. capsularis, was conducted at two different locations with the object of evolving an effective criterion for selecting superior single plants for fibre yield. At Delhi, variation existed only between varieties as a group and mutants as a group, while at Pusa variation also existed among the mutant populations of C. capsularis. A multiple regression approach was used to find the optimum combination of characters for prediction of fibre yield. A process of successive elimination of characters, based on the coefficient of determination provided by individual regression equations, was employed to arrive at the optimal set of characters for predicting fibre yield. It was found that plant height, basal and mid-diameters, and basal and mid-dry fibre weights would provide such an optimal set.
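
    The elimination process can be sketched as a backward search on the coefficient of determination. The stopping threshold below is an assumption, since the paper only states that characters were successively eliminated on the basis of R²:

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def successive_elimination(X, y, names, min_loss=0.01):
    """Repeatedly drop the character whose removal costs the least R^2,
    stopping when any further removal would lose more than `min_loss`
    (the stopping rule is illustrative, not from the paper)."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        full = r_squared(X[:, keep], y)
        losses = [full - r_squared(X[:, [j for j in keep if j != i]], y)
                  for i in keep]
        i_min = int(np.argmin(losses))
        if losses[i_min] > min_loss:
            break
        keep.pop(i_min)
    return [names[i] for i in keep]
```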

  6. AMD-stability in the presence of first-order mean motion resonances

    NASA Astrophysics Data System (ADS)

    Petit, A. C.; Laskar, J.; Boué, G.

    2017-11-01

    The angular momentum deficit (AMD) stability criterion makes it possible to discriminate between a priori stable planetary systems and systems for which stability is not granted and needs further investigation. AMD-stability is based on the conservation of the AMD in the averaged system at all orders of averaging. While the AMD criterion is rigorous, the conservation of the AMD is only guaranteed in the absence of mean-motion resonances (MMR). Here we extend the AMD-stability criterion to take into account mean-motion resonances, and more specifically the overlap of first-order MMRs. If the MMR islands overlap, the system will experience generalized chaos leading to instability. The Hamiltonian of two massive planets on coplanar quasi-circular orbits can be reduced to an integrable one-degree-of-freedom problem for period ratios close to a first-order MMR. We use the reduced Hamiltonian to derive a new overlap criterion for first-order MMRs. This stability criterion unifies the previous criteria proposed in the literature and admits the criteria obtained for initially circular and eccentric orbits as limit cases. We then improve the definition of AMD-stability to take into account the short-term chaos generated by MMR overlap. We analyze the outcome of this improved definition of AMD-stability on selected multi-planet systems from the Extrasolar Planets Encyclopædia.
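
    The AMD itself is a simple quantity to evaluate; a sketch using Laskar's definition, with inclinations in radians and any consistent unit system. This is illustrative only (the paper's contribution is the MMR-overlap extension, and the circular angular momentum Λ is written here without planet-mass corrections):

```python
import numpy as np

def amd(masses, a, e, inc, GM_star=1.0):
    """Angular momentum deficit of a planetary system:
        C = sum_k Lambda_k * (1 - sqrt(1 - e_k^2) * cos(i_k)),
    with Lambda_k ~ m_k * sqrt(GM_star * a_k) the circular angular
    momentum of planet k."""
    m = np.asarray(masses, dtype=float)
    a = np.asarray(a, dtype=float)
    e = np.asarray(e, dtype=float)
    inc = np.asarray(inc, dtype=float)
    Lam = m * np.sqrt(GM_star * a)
    return np.sum(Lam * (1.0 - np.sqrt(1.0 - e ** 2) * np.cos(inc)))
```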

  7. Applications of Decision Theory to Test-Based Decision Making. Project Psychometric Aspects of Item Banking No. 23. Research Report 87-9.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    The use of Bayesian decision theory to solve problems in test-based decision making is discussed. Four basic decision problems are distinguished: (1) selection; (2) mastery; (3) placement; and (4) classification, the situation where each treatment has its own criterion. Each type of decision can be identified as a specific configuration of one or…

  8. Controlling the Growth of Future LEO Debris Populations with Active Debris Removal

    NASA Technical Reports Server (NTRS)

    Liou, J.-C.; Johnson, N. L.; Hill, N. M.

    2008-01-01

    Active debris removal (ADR) was suggested as a potential means to remediate the low Earth orbit (LEO) debris environment as early as the 1980s. ADR has not become practical because of its technical difficulties and the high cost associated with the approach. However, as the LEO debris populations continue to increase, ADR may be the only option to preserve the near-Earth environment for future generations. An initial study was completed in 2007 to demonstrate that a simple ADR target selection criterion could be developed to reduce future debris population growth. The present paper summarizes a comprehensive study based on more realistic simulation scenarios, including fragments generated by the 2007 Fengyun-1C event, mitigation measures, and other target selection options. The simulations were based on the NASA long-term orbital debris projection model, LEGEND. A scenario where, at the end of their mission lifetimes, spacecraft and upper stages were moved to 25-year decay orbits was adopted as the baseline environment for comparison. Different annual removal rates and different ADR target selection criteria were tested, and the resulting 200-year future environment projections were compared with the baseline scenario. Results of this parametric study indicate that (1) an effective removal strategy can be developed based on the mass and collision probability of each object as the selection criterion, and (2) the LEO environment can be stabilized over the next 200 years with an ADR removal rate of five objects per year.
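
    Finding (1) suggests a very simple selection rule. A sketch, with an illustrative object record layout, that ranks debris by the product of mass and collision probability and takes the top few per year:

```python
def adr_targets(objects, n_per_year=5):
    """Rank objects by mass x collision probability (the criterion the
    study found effective) and return the top removal candidates.
    `objects` holds (name, mass_kg, collision_probability) tuples; the
    record layout and values are illustrative."""
    ranked = sorted(objects, key=lambda o: o[1] * o[2], reverse=True)
    return ranked[:n_per_year]

# Example with made-up catalog entries:
candidates = adr_targets([("SL-16 R/B", 8900, 2.1e-4),
                          ("FY-1C fragment", 5, 4.0e-4),
                          ("Envisat", 8100, 3.0e-4)])
```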

  9. [What determines the participation in stepwise occupational reintegration on behalf of the German pension insurance? Results of the "SOR cohort study"].

    PubMed

    Bürger, W; Streibelt, M

    2015-02-01

    Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis in terms of a successful return to work profit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40,406 persons (up to 59 years old, excluding rehabilitation after a hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in a SOR measure. Initially, a single criterion was defined to describe the need for SOR measures. This criterion is based on three different items: at least 12 weeks of sickness absence, combined with (a) a SIBAR score > 7 and/or (b) a perceived need for SOR. The main aim of our analyses was to describe the association between the SOR need-criterion and participation in SOR measures, as well as the predictors of SOR participation among those fulfilling the SOR need-criterion. The analyses were based on a multiple logistic regression model. Full data were available for 16,408 patients. The formal prerequisites for SOR were given for 33% of the sample, of whom 32% received SOR after rehabilitation and 43% fulfilled the SOR need-criterion. A negative relationship between these two categories was observed (phi = -0.08, p < 0.01). For patients who fulfilled the need-criterion, the probability of participating in SOR decreased by 22% (RR = 0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR = 0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need-criterion: the probability even decreased when the criterion was fulfilled. Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used in this case can serve as a basis for such a guideline. © Georg Thieme Verlag KG Stuttgart · New York.

  10. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and the harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion for feature subsets, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI datasets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method obtains competitive results in terms of both prediction accuracy and the number of selected features.
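
    A sketch of binary harmony search for feature selection, with the multi-fractal-dimension fitness left abstract (any subset-scoring function can be plugged in). HMS, HMCR, and PAR are the standard harmony search parameters; the paper's algorithmic improvements are not reproduced here:

```python
import numpy as np

def harmony_search_fs(fitness, n_features, hms=20, hmcr=0.9, par=0.3,
                      iters=500, seed=0):
    """Binary harmony search: evolve a memory of candidate feature
    masks, composing each new mask bitwise from memory (prob. HMCR,
    with pitch-adjustment flips at prob. PAR) or at random, and replace
    the worst harmony whenever the new one scores higher."""
    rng = np.random.default_rng(seed)
    memory = rng.random((hms, n_features)) < 0.5       # harmony memory
    scores = np.array([fitness(h) for h in memory])
    for _ in range(iters):
        new = np.empty(n_features, dtype=bool)
        for j in range(n_features):
            if rng.random() < hmcr:                    # memory consideration
                new[j] = memory[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment
                    new[j] = ~new[j]
            else:                                      # random consideration
                new[j] = rng.random() < 0.5
        s = fitness(new)
        worst = int(np.argmin(scores))
        if s > scores[worst]:
            memory[worst], scores[worst] = new, s
    best = int(np.argmax(scores))
    return memory[best], scores[best]
```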

  11. Criteria for clinical audit of the quality of hospital-based obstetric care in developing countries.

    PubMed Central

    Graham, W.; Wagaarachchi, P.; Penney, G.; McCaw-Binns, A.; Antwi, K. Y.; Hall, M. H.

    2000-01-01

    Improving the quality of obstetric care is an urgent priority in developing countries, where maternal mortality remains high. The feasibility of criterion-based clinical audit of the assessment and management of five major obstetric complications is being studied in Ghana and Jamaica. In order to establish case definitions and clinical audit criteria, a systematic review of the literature was followed by three expert panel meetings. A modified nominal group technique was used to develop consensus among experts on a final set of case definitions and criteria. Five main obstetric complications were selected and definitions were agreed. The literature review led to the identification of 67 criteria, and the panel meetings resulted in the modification and approval of 37 of these for the next stage of audit. Criterion-based audit, which has been devised and tested primarily in industrialized countries, can be adapted and applied where resources are poorer. The selection of audit criteria for such settings requires local expert opinion to be considered in addition to research evidence, so as to ensure that the criteria are realistic in relation to conditions in the field. Practical methods for achieving this are described in the present paper. PMID:10859855

  12. Optimizing phonon space in the phonon-coupling model

    NASA Astrophysics Data System (ADS)

    Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.

    2017-08-01

    We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use here the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence with the new criterion and can work out a natural (optimal) cutoff parameter. Then we use the freshly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, looking also at the dependence of the results when varying the Skyrme parametrization.

  13. Selection and use of TLDs for high-precision NERVA shielding measurements

    NASA Technical Reports Server (NTRS)

    Woodsum, H. C.

    1972-01-01

    An experimental evaluation of thermoluminescent dosimeters was performed in order to select high precision dosimeters for a study whose purpose is to measure gamma streaming through the coolant passages of a simulated flight type internal NERVA reactor shield. Based on this study, the CaF2 chip TLDs are the most reproducible dosimeters with reproducibility generally within a few percent, but none of the TLDs tested met the reproducibility criterion of plus or minus 2%.

  14. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    NASA Astrophysics Data System (ADS)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  15. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Information-theoretic model selection for optimal prediction of stochastic dynamical systems from data

    NASA Astrophysics Data System (ADS)

    Darmon, David

    2018-03-01

    In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
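
    A minimal version of the selection procedure, assuming a k-nearest-neighbour predictor with a Gaussian predictive density as the nonparametric stand-in (the paper's estimator differs); the embedding dimension with the smallest held-out negative log-predictive likelihood is chosen:

```python
import numpy as np
from scipy.stats import norm

def delay_embed(x, dim):
    """Delay-coordinate matrix: row t is (x_t, ..., x_{t+dim-1}),
    paired with target x_{t+dim}."""
    X = np.column_stack([x[i:len(x) - dim + i] for i in range(dim)])
    return X, x[dim:]

def nlpl(x, dim, k=15, split=0.7):
    """Negative log-predictive likelihood of a k-NN predictor built on a
    delay embedding of dimension `dim`, scored on held-out data."""
    X, y = delay_embed(np.asarray(x, dtype=float), dim)
    n_train = int(split * len(y))
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    ll = 0.0
    for xi, yi in zip(Xte, yte):
        d = np.linalg.norm(Xtr - xi, axis=1)
        nn = np.argsort(d)[:k]
        mu, sigma = ytr[nn].mean(), ytr[nn].std(ddof=1) + 1e-12
        ll += norm.logpdf(yi, mu, sigma)
    return -ll / len(yte)

# Pick the predictively optimal embedding dimension, e.g.:
# best_dim = min(range(1, 9), key=lambda p: nlpl(series, p))
```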

  17. The Counselor Evaluation Rating Scale: A Valid Criterion of Counselor Effectiveness?

    ERIC Educational Resources Information Center

    Jones, Lawrence K.

    1974-01-01

    The validity of recent recommendations regarding the use of certain factors of the 16 Personality Factor Questionnaire (16PF) to select persons for counselor training programs, where the CERS was the criterion measure, is challenged. (Author)

  18. An Interoperability Consideration in Selecting Domain Parameters for Elliptic Curve Cryptography

    NASA Technical Reports Server (NTRS)

    Ivancic, Will (Technical Monitor); Eddy, Wesley M.

    2005-01-01

    Elliptic curve cryptography (ECC) will be an important technology for electronic privacy and authentication in the near future. There are many published specifications for elliptic curve cryptosystems, most of which contain detailed descriptions of the process for the selection of domain parameters. Selecting strong domain parameters ensures that the cryptosystem is robust to attacks. Due to a limitation in several published algorithms for doubling points on elliptic curves, some ECC implementations may produce incorrect, inconsistent, and incompatible results if domain parameters are not carefully chosen under a criterion that we describe. Few documents specify the addition or doubling of points in such a manner as to avoid this problematic situation. The safety criterion we present is not listed in any ECC specification we are aware of, although several other guidelines for domain selection are discussed in the literature. We provide a simple example of how a set of domain parameters not meeting this criterion can produce catastrophic results, and outline a simple means of testing curve parameters for interoperable safety over doubling.
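
    The abstract does not spell out the safety criterion, but the classic hazard with unguarded doubling formulas is easy to show: the affine formula divides by 2y, so order-2 points (y = 0) and the point at infinity need explicit handling, as in this sketch over a prime field:

```python
def ec_double(P, a, p):
    """Double a point on y^2 = x^3 + a*x + b over GF(p), affine form.

    The textbook slope lambda = (3x^2 + a) / (2y) is undefined when
    y = 0 (an order-2 point) or at the point at infinity; unguarded
    implementations can disagree exactly in these cases. `None`
    represents the point at infinity.
    """
    if P is None:
        return None
    x, y = P
    if y % p == 0:               # order-2 point: doubling yields infinity
        return None
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p   # modular inverse (Py 3.8+)
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return (x3, y3)
```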

  19. A Novel Non-Invasive Selection Criterion for the Preservation of Primitive Dutch Konik Horses

    PubMed Central

    May-Davis, Sharon; Shorter, Kathleen; Vermeulen, Zefanja; Butler, Raquel; Koekkoek, Marianne

    2018-01-01

    The Dutch Konik is valued from a genetic conservation perspective and also for its role in preservation of natural landscapes. The primary management objective for the captive breeding of this primitive horse is to maintain its genetic purity, whilst also maintaining the nature reserves on which they graze. Breeding selection has traditionally been based on phenotypic characteristics consistent with the breed description, and the selection of animals for removal from the breeding program is problematic at times due to high uniformity within the breed, particularly in height at the wither, colour (mouse to grey dun) and presence of primitive markings. With the objective of identifying an additional non-invasive selection criterion with potential uniqueness to the Dutch Konik, this study investigates the anatomic parameters of the distal equine limb, with a specific focus on the relative lengths of the individual splint bones. Post-mortem dissections performed on distal limbs of Dutch Konik (n = 47) and modern domesticated horses (n = 120) revealed significant differences in relation to the length and symmetry of the 2nd and 4th Metacarpals and Metatarsals. Distal limb characteristics with apparent uniqueness to the Dutch Konik are described which could be an important tool in the selection and preservation of the breed. PMID:29389896

  20. A Novel Non-Invasive Selection Criterion for the Preservation of Primitive Dutch Konik Horses.

    PubMed

    May-Davis, Sharon; Brown, Wendy Y; Shorter, Kathleen; Vermeulen, Zefanja; Butler, Raquel; Koekkoek, Marianne

    2018-02-01

    The Dutch Konik is valued from a genetic conservation perspective and also for its role in preservation of natural landscapes. The primary management objective for the captive breeding of this primitive horse is to maintain its genetic purity, whilst also maintaining the nature reserves on which they graze. Breeding selection has traditionally been based on phenotypic characteristics consistent with the breed description, and the selection of animals for removal from the breeding program is problematic at times due to high uniformity within the breed, particularly in height at the wither, colour (mouse to grey dun) and presence of primitive markings. With the objective of identifying an additional non-invasive selection criterion with potential uniqueness to the Dutch Konik, this study investigates the anatomic parameters of the distal equine limb, with a specific focus on the relative lengths of the individual splint bones. Post-mortem dissections performed on distal limbs of Dutch Konik (n = 47) and modern domesticated horses (n = 120) revealed significant differences in relation to the length and symmetry of the 2nd and 4th Metacarpals and Metatarsals. Distal limb characteristics with apparent uniqueness to the Dutch Konik are described which could be an important tool in the selection and preservation of the breed.

  1. Establishing Arbitrarily Applicable Relations of Same and Opposite with the Relational Completion Procedure: Selection-Based Feedback

    ERIC Educational Resources Information Center

    Dymond, Simon; Ng, Tsz Ching; Whelan, Robert

    2013-01-01

    Research suggests that the relational completion procedure (RCP) is effective for studying derived relations of same and opposite. Previously, procedural parameters, such as the presence or absence of a confirmatory response requirement, were found to have a facilitative effect on the number of training trials to criterion and overall arbitrary…

  2. 34 CFR 642.22 - How does the Secretary evaluate prior experience?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Secretary may add from 1 to 15 points to the point score obtained on the basis of the selection criteria in § 642.21, based on the applicant's success in meeting the administrative requirements and programmatic objectives of paragraph (e) of this section. (2) The maximum possible score for each criterion is indicated...

  3. Measurement versus prediction in the construction of patient-reported outcome questionnaires: can we have our cake and eat it?

    PubMed

    Smits, Niels; van der Ark, L Andries; Conijn, Judith M

    2017-11-02

    Two important goals when using questionnaires are (a) measurement: the questionnaire is constructed to assign numerical values that accurately represent the test taker's attribute, and (b) prediction: the questionnaire is constructed to give an accurate forecast of an external criterion. Construction methods aimed at measurement prescribe that items should be reliable. In practice, this leads to questionnaires with high inter-item correlations. By contrast, construction methods aimed at prediction typically prescribe that items have a high correlation with the criterion and low inter-item correlations. The latter approach has often been said to produce a paradox concerning the relation between reliability and validity [1-3], because it is often assumed that good measurement is a prerequisite of good prediction. To answer four questions: (1) Why are measurement-based methods suboptimal for questionnaires that are used for prediction? (2) How should one construct a questionnaire that is used for prediction? (3) Do questionnaire-construction methods that optimize measurement and prediction lead to the selection of different items in the questionnaire? (4) Is it possible to construct a questionnaire that can be used for both measurement and prediction? An empirical data set consisting of scores of 242 respondents on questionnaire items measuring mental health is used to select items by means of two methods: a method that optimizes the predictive value of the scale (i.e., forecast a clinical diagnosis), and a method that optimizes the reliability of the scale. We show that for the two scales different sets of items are selected and that a scale constructed to meet the one goal does not show optimal performance with reference to the other goal. The answers are as follows: (1) Because measurement-based methods tend to maximize inter-item correlations by which predictive validity reduces. (2) Through selecting items that correlate highly with the criterion and lowly with the remaining items. (3) Yes, these methods may lead to different item selections. (4) For a single questionnaire: Yes, but it is problematic because reliability cannot be estimated accurately. For a test battery: Yes, but it is very costly. Implications for the construction of patient-reported outcome questionnaires are discussed.
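
    A greedy sketch of the prediction-oriented construction method, selecting items with high criterion correlation and low correlation with items already chosen. The specific penalty (mean absolute inter-item correlation) is an assumption; the paper states only the principle:

```python
import numpy as np

def select_for_prediction(item_corr, crit_corr, n_select):
    """Greedily pick items that correlate highly with the criterion and
    lowly with already selected items.

    item_corr: (items x items) inter-item correlation matrix
    crit_corr: item-criterion correlations
    """
    selected, remaining = [], list(range(len(crit_corr)))
    while remaining and len(selected) < n_select:
        def score(i):
            penalty = (np.mean([abs(item_corr[i, j]) for j in selected])
                       if selected else 0.0)
            return abs(crit_corr[i]) - penalty
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

    A reliability-oriented method would instead maximize inter-item correlations (e.g., coefficient alpha), which is why the two goals tend to select different item sets.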

  4. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs

    PubMed Central

    Pérez, Ana G.; León, Lorenzo; Sanz, Carlos; de la Rosa, Raúl

    2018-01-01

    Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have been mainly focused on improving the olive adaptation to different growing systems, the productivity and the oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the latter stages of the breeding programs. Due to their health promoting and organoleptic properties, phenolic compounds are one of the most important quality markers for Virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulties of evaluating oil phenolic composition in large numbers of samples and the limited knowledge on the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties (“Picual” and “Arbequina”) used as control. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both fruit and oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes in the different environments. In summary, the high genotypic variance and the simplified procedure of the proposed methodology for fruit phenol evaluation seem convenient for breeding programs aiming at obtaining new cultivars with improved phenolic profile. PMID:29535752

  5. Fruit Phenolic Profiling: A New Selection Criterion in Olive Breeding Programs.

    PubMed

    Pérez, Ana G; León, Lorenzo; Sanz, Carlos; de la Rosa, Raúl

    2018-01-01

    Olive growing is mainly based on traditional varieties selected by the growers across the centuries. The few attempts so far reported to obtain new varieties by systematic breeding have been mainly focused on improving the olive adaptation to different growing systems, the productivity and the oil content. However, the improvement of oil quality has rarely been considered as a selection criterion, and only in the latter stages of the breeding programs. Due to their health promoting and organoleptic properties, phenolic compounds are one of the most important quality markers for Virgin olive oil (VOO), although they are not commonly used as quality traits in olive breeding programs. This is mainly due to the difficulties of evaluating oil phenolic composition in large numbers of samples and the limited knowledge on the genetic and environmental factors that may influence phenolic composition. In the present work, we propose a high throughput methodology to include the phenolic composition as a selection criterion in olive breeding programs. For that purpose, the phenolic profile has been determined in fruits and oils of several breeding selections and two varieties ("Picual" and "Arbequina") used as control. The effect of three different environments, typical for olive growing in Andalusia, Southern Spain, was also evaluated. A high genetic effect was observed on both fruit and oil phenolic profile. In particular, the breeding selection UCI2-68 showed an optimum phenolic profile, which adds to the good agronomic performance previously reported. A high correlation was found between fruit and oil total phenolic content, as well as between some individual phenols from the two different matrices. The environmental effect on phenolic compounds was also significant in both fruit and oil, although the low genotype × environment interaction allowed similar ranking of genotypes in the different environments. In summary, the high genotypic variance and the simplified procedure of the proposed methodology for fruit phenol evaluation seem convenient for breeding programs aiming at obtaining new cultivars with improved phenolic profile.

  6. Strength-based criterion shifts in recognition memory.

    PubMed

    Singer, Murray

    2009-10-01

    In manipulations of stimulus strength between lists, a more lenient signal detection criterion is more frequently applied to a weak than to a strong stimulus class. However, with randomly intermixed weak and strong test probes, such a criterion shift often does not result. A procedure that has yielded delay-based within-list criterion shifts was applied to strength manipulations in recognition memory for categorized word lists. When participants made semantic ratings about each stimulus word, strength-based criterion shifts emerged regardless of whether words from pairs of categories were studied in separate blocks (Experiment 1) or in intermixed blocks (Experiment 2). In Experiment 3, the criterion shift persisted under the semantic-rating study task, but not under rote memorization. These findings suggest that continually adjusting the recognition decision criterion is cognitively feasible. They provide a technique for manipulating the criterion shift, and they identify competing theoretical accounts of these effects.

  7. Formation of integrated structural units using the systematic and integrated method when implementing high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Abramov, Ivan

    2018-03-01

    Development of design documentation for a future construction project gives rise to a number of issues with the main one being selection of manpower for structural units of the project's overall implementation system. Well planned and competently staffed integrated structural construction units will help achieve a high level of reliability and labor productivity and avoid negative (extraordinary) situations during the construction period eventually ensuring improved project performance. Research priorities include the development of theoretical recommendations for enhancing reliability of a structural unit staffed as an integrated construction crew. The author focuses on identification of destabilizing factors affecting formation of an integrated construction crew; assessment of these destabilizing factors; based on the developed mathematical model, highlighting the impact of these factors on the integration criterion with subsequent identification of an efficiency and reliability criterion for the structural unit in general. The purpose of this article is to develop theoretical recommendations and scientific and methodological provisions of an organizational and technological nature in order to identify a reliability criterion for a structural unit based on manpower integration and productivity criteria. With this purpose in mind, complex scientific tasks have been defined requiring special research, development of corresponding provisions and recommendations based on the system analysis findings presented herein.

  8. Evaluation of pump pulsation in respirable size-selective sampling: Part III. Investigation of European standard methods.

    PubMed

    Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin

    2014-10-01

    Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six levels for the medium- and seven for the high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min−1 for the medium- and 4.4, 10, and 11.2 l min−1 for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP = 8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP = 18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PP_EN = 0.014 + 0.375 × PP_NIOSH (adjusted R² = 0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods. The 25% PP criterion recommended by Lee et al. (2014a), an average value derived from repetitive measurements, corresponds to 11% PP_EN. The 10% pass/fail criterion in the EN Standards is not based on extensive laboratory evaluation and would unreasonably exclude at least one pump (i.e. the AirChek XR5000 in this study); therefore, the more accurate criterion of an 11% average from repetitive measurements should be substituted. This study suggests that users can measure PP using either a real-world sampling train or a resistor setup and obtain equivalent findings by applying the model derived herein. The findings of this study will be delivered to the consensus committees to be considered when those standards, including EN 1232-1997, EN 12919-1999, and ISO 13137-2013, are revised. Published by Oxford University Press on behalf of the British Occupational Hygiene Society 2014.

  9. An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion

    NASA Astrophysics Data System (ADS)

    Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo

    2018-05-01

    The strength and deformation characteristics of rocks are among the most important mechanical properties for rock engineering construction. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). Unlike the HB criterion, the proposed criterion accounts for the intermediate principal stress effect, and unlike the NUSC, it is nonlinear in the meridian plane. Only three parameters need to be determined by experiments, including the two HB parameters σ_c and m_i. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits true triaxial test data well and performs better than three other existing criteria. Then, by introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, with a yield function derived from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The performance of the constitutive model shows good agreement with the results of experimental tests.
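
    For orientation, the Hoek-Brown ingredient of the proposed criterion can be evaluated directly. This sketch implements the generalized HB criterion (2002 edition), which reduces to the intact-rock form for GSI = 100 and D = 0 and shows where σ_c and m_i enter; it does not reproduce the paper's new combined criterion:

```python
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_c, m_i, GSI=100.0, D=0.0):
    """Major principal stress at failure from the generalized
    Hoek-Brown criterion:
        sigma1 = sigma3 + sigma_c * (m_b * sigma3 / sigma_c + s) ** a
    with m_b, s, a derived from GSI and the disturbance factor D."""
    m_b = m_i * np.exp((GSI - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((GSI - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-GSI / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_c * (m_b * sigma3 / sigma_c + s) ** a

# Intact rock (GSI = 100, D = 0) with illustrative parameters:
print(hoek_brown_sigma1(sigma3=10.0, sigma_c=100.0, m_i=12.0))
```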

  10. Validation of Veterans Affairs Electronic Medical Record Smoking Data Among Iraq- and Afghanistan-Era Veterans.

    PubMed

    Calhoun, Patrick S; Wilson, Sarah M; Hertzberg, Jeffrey S; Kirby, Angela C; McDonald, Scott D; Dennis, Paul A; Bastian, Lori A; Dedert, Eric A; Beckham, Jean C

    2017-11-01

    Research using the Veterans Health Administration (VA) electronic medical records (EMR) has been limited by a lack of reliable smoking data. To evaluate the validity of using VA EMR "Health Factors" data to determine smoking status among veterans with recent military service. Sensitivity, specificity, area under the receiver-operating curve (AUC), and kappa statistics were used to evaluate concordance between VA EMR smoking status and criterion smoking status. Veterans (N = 2025) with service during the wars in Iraq/Afghanistan who participated in the VA Mid-Atlantic Post-Deployment Mental Health (PDMH) Study. Criterion smoking status was based on self-report during a confidential study visit. VA EMR smoking status was measured by coding health factors data entries (populated during automated clinical reminders) in three ways: based on the most common health factor, the most recent health factor, and the health factor within 12 months of the criterion smoking status data collection date. Concordance with PDMH smoking status (current, former, never) was highest when determined by the most commonly observed VA EMR health factor (κ = 0.69) and was not significantly impacted by psychiatric status. Agreement was higher when smoking status was dichotomized: current vs. not current (κ = 0.73; sensitivity = 0.84; specificity = 0.91; AUC = 0.87); ever vs. never (κ = 0.75; sensitivity = 0.85; specificity = 0.90; AUC = 0.87). There were substantial missing Health Factors data when restricting analyses to a 12-month period from the criterion smoking status date. Current smokers had significantly more Health Factors entries compared to never or former smokers. The use of computerized tobacco screening data to determine smoking status is valid and feasible. Results indicating that smokers have significantly more health factors entries than non-smokers suggest that caution is warranted when using the EMR to select cases for cohort studies as the risk for selection bias appears high.
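
    The agreement statistics reported for the dichotomized comparisons come straight from a 2 × 2 table; a small sketch (AUC, which depends on the underlying scores, is omitted):

```python
def agreement_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table of
    EMR-positive vs. criterion-positive counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                           # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# Illustrative counts only (the study reports the resulting statistics,
# not the underlying table):
print(agreement_stats(tp=40, fp=10, fn=5, tn=45))
```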

  11. Multi atlas based segmentation: Should we prefer the best atlas group over the group of best atlases?

    PubMed

    Zaffino, Paolo; Ciardo, Delia; Raudaschl, Patrik; Fritscher, Karl; Ricotti, Rosalinda; Alterio, Daniela; Marvaso, Giulia; Fodor, Cristiana; Baroni, Guido; Amato, Francesco; Orecchia, Roberto; Jereczek-Fossa, Barbara Alicja; Sharp, Gregory C; Spadea, Maria Francesca

    2018-05-22

    Multi Atlas Based Segmentation (MABS) uses a database of atlas images, and an atlas selection process is used to choose an atlas subset for registration and voting. In the current state of the art, atlases are chosen according to a similarity criterion between the target subject and each atlas in the database. In this paper, we propose a new concept for atlas selection that relies on selecting the best performing group of atlases rather than the group of highest scoring individual atlases. Experiments were performed using CT images of 50 patients, with contours of the brainstem and parotid glands. The dataset was randomly split into 2 groups: 20 volumes were used as an atlas database and 30 served as target subjects for testing. Classic oracle selection, where atlases are chosen by the highest Dice Similarity Coefficient (DSC) with the target, was performed. This was compared to oracle group selection, where all combinations of atlas subgroups were considered and scored by computing the DSC with the target subject. Subsequently, Convolutional Neural Networks (CNNs) were designed to predict the best group of atlases. The results were also compared with a selection strategy based on Normalized Mutual Information (NMI). Oracle group selection proved to be significantly better than classic oracle selection (p < 10⁻⁵). Atlas group selection led to a median ± interquartile DSC of 0.740 ± 0.084, 0.718 ± 0.086 and 0.670 ± 0.097 for the brainstem and left/right parotid glands respectively, outperforming NMI selection (0.676 ± 0.113, 0.632 ± 0.104 and 0.606 ± 0.118; p < 0.001) as well as classic oracle selection. The implemented methodology is a proof of principle that selecting atlases by considering the performance of the entire group rather than each single atlas leads to higher segmentation accuracy, outperforming even the current oracle strategy. This finding opens a new discussion about the most appropriate atlas selection criterion for MABS. © 2018 Institute of Physics and Engineering in Medicine.
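
    The difference between the two oracle strategies is easy to state in code. A sketch with an abstract label-fusion step (e.g., majority voting), exhaustively scoring subsets the way the group oracle does:

```python
import numpy as np
from itertools import combinations

def dsc(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def oracle_group(target, atlas_labels, fuse, group_size):
    """Oracle *group* selection: score every atlas subset by the DSC of
    its fused label map against the target, rather than ranking atlases
    individually. `fuse` maps a list of label masks to one fused mask
    and is left abstract here."""
    best, best_score = None, -1.0
    for combo in combinations(range(len(atlas_labels)), group_size):
        fused = fuse([atlas_labels[i] for i in combo])
        score = dsc(target, fused)
        if score > best_score:
            best, best_score = combo, score
    return best, best_score
```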

  12. Investigation into Hydraulic Gear Pump Efficiencies during the First Few Hours of the Pumps’ Lives and a Comparative Study of Accelerated Life Test Methods on Hydraulic Fluid Power Gear Pumps. Parts 1 and 2.

    DTIC Science & Technology

    1979-11-12

    ...Hydraulics and Pneumatics magazine Designers Guide to Fluid Power Products. The results of this survey were later analyzed and served as the basis in...selected. The selection criterion is based on formulas which use instrument design features, calibration data and accuracy needs. Once selected, the

  13. Long-term outcomes of endoscopic submucosal dissection versus surgery in early gastric cancer meeting expanded indication including undifferentiated-type tumors: a criteria-based analysis.

    PubMed

    Lee, Sunpyo; Choi, Kee Don; Han, Minkyu; Na, Hee Kyong; Ahn, Ji Yong; Jung, Kee Wook; Lee, Jeong Hoon; Kim, Do Hoon; Song, Ho June; Lee, Gin Hyug; Yook, Jeong-Hwan; Kim, Byung Sik; Jung, Hwoon-Yong

    2018-05-01

    Endoscopic submucosal dissection (ESD) for early gastric cancer (EGC) meeting the expanded indication is considered investigational. We aimed to compare the long-term outcomes of ESD and surgery for EGC within the expanded indication, based on each criterion. This study included 1823 consecutive EGC patients meeting the expanded indication conditions and treated at a tertiary referral center: 916 and 907 patients underwent surgery and ESD, respectively. The expanded indication included four discrete criteria: (I) intramucosal differentiated tumor, without ulcers, size >2 cm; (II) intramucosal differentiated tumor, with ulcers, size ≤3 cm; (III) intramucosal undifferentiated tumor, without ulcers, size ≤2 cm; and (IV) submucosal invasion <500 μm (sm1), differentiated tumor, size ≤3 cm. We selected 522 patients in each group by propensity score matching and retrospectively evaluated each group. The primary outcome was overall survival (OS); the secondary outcomes were disease-specific survival (DSS), recurrence-free survival (RFS), and treatment-related complications. In all patients and in the subgroups meeting each criterion, OS and DSS were not significantly different between groups (OS and DSS, all patients: p = 0.354 and p = 0.930; criterion I: p = 0.558 and p = 0.688; criterion II: p = 1.000 and p = 1.000; criterion III: p = 0.750 and p = 0.799; and criterion IV: p = 0.599 and p = 0.871). RFS, in all patients and in criterion I, was significantly shorter in the ESD group than in the surgery group (p < 0.001 and p < 0.003, respectively). The surgery group showed higher rates of late and severe treatment-related complications than the ESD group. ESD may be an alternative treatment option to surgery for EGCs meeting expanded indications, including undifferentiated-type tumors.

  14. A New Multiaxial High-Cycle Fatigue Criterion Based on the Critical Plane for Ductile and Brittle Materials

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei

    2015-02-01

    An improved high-cycle multiaxial fatigue criterion based on the critical plane is proposed in this paper. In the proposed criterion, the critical plane is defined as the plane of maximum shear stress (MSS), which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion is extended to a fatigue life prediction model that is applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated against experimental results from tests of 7075-T651 aluminum alloy and against data reported in the literature.

  15. Criterion for excipients screening in the development of nanoemulsion formulation of three anti-inflammatory drugs.

    PubMed

    Shakeel, Faiyaz

    2010-01-01

    The present study was undertaken to screen different excipients for the development of nanoemulsion formulations of three anti-inflammatory drugs, namely ketoprofen, celecoxib (CXB) and meloxicam. Based on the solubility profiles of each drug in oil, Triacetin (ketoprofen and CXB) and Labrafil (meloxicam) were selected as the oil phase. Based on the maximum solubilization potential of the oil in different surfactants, Cremophor-EL (ketoprofen and CXB) and Tween-80 (meloxicam) were selected as surfactants. Based on the maximum nanoemulsion region in the pseudoternary phase diagrams, Transcutol-HP was selected as cosurfactant for all three drugs. Surfactant-to-cosurfactant mass ratios of 1:1 (ketoprofen and CXB) and 2:1 (meloxicam) were selected for choosing the different nanoemulsions, again on the basis of the maximum nanoemulsion region in the phase diagrams. All selected nanoemulsion formulations were found to be thermodynamically stable. The results of these studies showed that all excipients were properly optimized for the development of nanoemulsion formulations of ketoprofen, CXB and meloxicam.

  16. Combined Optimal Control System for excavator electric drive

    NASA Astrophysics Data System (ADS)

    Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.

    2018-03-01

    The article presents a synthesis of combined optimal control algorithms for the AC drive of the excavator rotation mechanism. The synthesis consists of regulating the external coordinates on the basis of optimal control theory and correcting the internal coordinates of the electric drive using the "technical optimum" method. The research shows the advantage of combined optimal control systems for the electric rotary drive over classical subordinate-regulation systems. The paper presents a method for selecting the optimality-criterion coefficients so as to find the intersection of the ranges of permissible values of the control object's coordinates. Tuning the system through the choice of these coefficients allows the required drive characteristics to be selected: the dynamic moment (M) and the transient-process time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, to reduce the transient time (tpp).

  17. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.

    PubMed

    Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by predicting future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve cost and energy efficiency while maintaining quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-language-based statistical system, which checks whether the buffered data follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We evaluated our system with real traces of CPU utilization from an IaaS cloud of one hundred and twenty servers.
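
    A minimal sketch of the branching logic described above, assuming a buffer of recent utilization samples in `window`. The normality test and AIC-based ARIMA order search use real scipy/statsmodels APIs; the order grid is arbitrary, and the AR-NN/NIC branch is left as a placeholder since the paper's network selection is not reproduced here.

    ```python
    import numpy as np
    from scipy import stats
    from statsmodels.tsa.arima.model import ARIMA

    def forecast_next(window, max_p=3):
        # normaltest needs a reasonably sized sample (scipy warns below ~20 points)
        _, p_value = stats.normaltest(window)
        if p_value > 0.05:  # cannot reject normality: use ARIMA, pick order by AIC
            best_fit, best_aic = None, np.inf
            for p in range(1, max_p + 1):
                fit = ARIMA(window, order=(p, 1, 1)).fit()  # small, arbitrary order grid
                if fit.aic < best_aic:
                    best_fit, best_aic = fit, fit.aic
            return best_fit.forecast(steps=1)[0]
        # non-Gaussian branch: fit AR-NN candidates and select by lowest NIC
        raise NotImplementedError("AR-NN/NIC selection not sketched here")
    ```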

  18. Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud

    PubMed Central

    Hassan, Shahzad; Khan, Gul Muhammad

    2017-01-01

    Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by predicting future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve cost and energy efficiency while maintaining quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the time-span size. The buffers are read by an R-language-based statistical system, which checks whether the buffered data follow a Gaussian distribution. If they do, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We evaluated our system with real traces of CPU utilization from an IaaS cloud of one hundred and twenty servers. PMID:28811819

  19. Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective

    NASA Astrophysics Data System (ADS)

    Jamali, Tayeb; Jafari, G. R.

    2015-07-01

    We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. To draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated; that is, they need to be compared against some criterion that provides a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity in stock-market returns, a remarkable agreement with the fGn is achieved.

  20. Volcano plots in analyzing differential expressions with mRNA microarrays.

    PubMed

    Li, Wentian

    2012-12-01

    A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or −log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines of the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarrays.
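
    As a hedged illustration, the sketch below draws a volcano plot with the two perpendicular cutoff lines of the "double filtering" criterion; the data are simulated and the thresholds arbitrary.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    logfc = rng.normal(0.0, 1.0, 5000)   # unstandardized signal (log-fold-change)
    pvals = rng.uniform(0.0, 1.0, 5000)  # p-values from a per-gene t-test

    neglogp = -np.log10(pvals)           # standardized signal
    hits = (np.abs(logfc) > 1.0) & (neglogp > 2.0)  # double filtering: two straight cutoffs

    plt.scatter(logfc, neglogp, s=3, c=np.where(hits, "red", "grey"))
    plt.axvline(-1.0); plt.axvline(1.0); plt.axhline(2.0)
    plt.xlabel("log2 fold change"); plt.ylabel("-log10(p-value)")
    plt.show()
    ```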

  1. An ensemble of SVM classifiers based on gene pairs.

    PubMed

    Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin

    2013-07-01

    In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.
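
    A sketch of the TSP pair-scoring step and the per-pair base classifiers described above; the GA stage for combining base classifiers is omitted, and the names are illustrative. For real microarray data the pair enumeration would be restricted to a pre-filtered gene subset, since it is quadratic in the number of genes.

    ```python
    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC

    def tsp_score(X, y, i, j):
        # |P(X_i < X_j | class 0) - P(X_i < X_j | class 1)| for genes i and j
        p0 = np.mean(X[y == 0, i] < X[y == 0, j])
        p1 = np.mean(X[y == 1, i] < X[y == 1, j])
        return abs(p0 - p1)

    def top_pairs(X, y, n_pairs=10):
        scored = {(i, j): tsp_score(X, y, i, j)
                  for i, j in combinations(range(X.shape[1]), 2)}
        return sorted(scored, key=scored.get, reverse=True)[:n_pairs]

    def train_base_classifiers(X, y, pairs):
        # one SVM per informative gene pair, each seeing only a 2-D projection
        return [SVC().fit(X[:, list(pair)], y) for pair in pairs]
    ```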

  2. Comparison of case note review methods for evaluating quality and safety in health care.

    PubMed

    Hutchinson, A; Coster, J E; Cooper, K L; McIntosh, A; Walters, S J; Bath, P A; Pearson, M; Young, T A; Rantell, K; Campbell, M J; Ratcliffe, J

    2010-02-01

    To determine which of two methods of case note review, holistic (implicit) and criterion-based (explicit), provides the more useful and reliable information for quality and safety of care, and the level of agreement within and between groups of health-care professionals when they use the two methods to review the same record; and to explore the process-outcome relationship between holistic and criterion-based quality-of-care measures and hospital-level outcome indicators. Case notes of patients at randomly selected hospitals in England were used. In the first part of the study, retrospective multiple reviews of 684 case notes were undertaken at nine acute hospitals using both holistic and criterion-based review methods. Quality-of-care measures included evidence-based review criteria and a quality-of-care rating scale. Textual commentary on the quality of care was provided as a component of holistic review. Review teams comprised combinations of doctors (n = 16), specialist nurses (n = 10), clinically trained audit staff (n = 3) and non-clinical audit staff (n = 9). In the second part of the study, process (quality and safety) of care data were collected from the case notes of 1565 people with either chronic obstructive pulmonary disease (COPD) or heart failure in 20 hospitals. Doctors collected criterion-based data from case notes and used implicit review methods to derive textual comments on the quality of care provided and to score the care overall. Data were analysed for intrarater consistency, inter-rater reliability between pairs of staff using intraclass correlation coefficients (ICCs), and completeness of criterion data capture, and comparisons were made within and between staff groups and between review methods. To explore the process-outcome relationship, a range of publicly available health-care indicator data were used as proxy outcomes in a multilevel analysis. Overall, 1473 holistic and 1389 criterion-based reviews were undertaken in the first part of the study. When same staff-type reviewer pairs/groups reviewed the same record, holistic scale score inter-rater reliability was moderate within each of the three staff groups (ICC 0.46-0.52), and inter-rater reliability for criterion-based scores was moderate to good (ICC 0.61-0.88). When different staff-type pairs/groups reviewed the same record, agreement between the reviewer pairs/groups was weak to moderate for overall care (ICC 0.24-0.43). Comparison of the holistic review scores and criterion-based scores of case notes reviewed by doctors and by non-clinical audit staff showed a reasonable level of agreement (p-values for difference 0.406 and 0.223, respectively), although results from all three staff types showed no overall agreement (p-value for difference 0.057). Detailed qualitative analysis of the textual data indicated that the three staff types tended to provide different forms of commentary on quality of care, although there was some overlap between some groups. In the process-outcome study there were generally high criterion-based scores for all hospitals, whereas there was more interhospital variation in the holistic review overall scale scores. Textual commentary on the quality of care verified the holistic scale scores. Differences among hospitals with regard to the relationship between mortality and quality of care were not statistically significant. Using the holistic approach, the three groups of staff appeared to interpret the recorded care differently when they each reviewed the same record. When the same clinical record was reviewed by doctors and non-clinical audit staff, there was no significant difference between the assessments of quality of care generated by the two groups. All three staff groups performed reasonably well when using criterion-based review, although the quality and type of information provided by doctors were of greater value. Therefore, when measuring quality of care from case notes, consideration needs to be given to the method of review, the type of staff undertaking the review, and the methods of analysis available to the review team. Review can be enhanced by using a combination of criterion-based and structured holistic methods with textual commentary, and variation in quality of care can best be identified from a combination of holistic scale scores and textual data review.

  3. Cluster analysis based on dimensional information with applications to feature selection and classification

    NASA Technical Reports Server (NTRS)

    Eigen, D. J.; Fromm, F. R.; Northouse, R. A.

    1974-01-01

    A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and the configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remotely sensed multispectral scanner data.

  4. Analysis of augmented aircraft flying qualities through application of the Neal-Smith criterion

    NASA Technical Reports Server (NTRS)

    Bailey, R. E.; Smith, R. E.

    1981-01-01

    The Neal-Smith criterion is examined for possible applications in the evaluation of augmented fighter aircraft flying qualities. Longitudinal and lateral flying qualities are addressed. Based on the application of several longitudinal flying qualities data bases, revisions are proposed to the original criterion. Examples are given which show the revised criterion to be a good discriminator of pitch flying qualities. Initial results of lateral flying qualities evaluation through application of the Neal-Smith criterion are poor. Lateral aircraft configurations whose flying qualities are degraded by roll ratcheting effects map into the Level 1 region of the criterion. A third dimension of the criterion for flying qualities specification is evident. Additional criteria are proposed to incorporate this dimension into the criterion structure for flying qualities analysis.

  5. The Missing Middle in Validation Research

    ERIC Educational Resources Information Center

    Taylor, Erwin K.; Griess, Thomas

    1976-01-01

    In most selection validation research, only the upper and lower tails of the criterion distribution are used, often yielding misleading or incorrect results. Provides formulas and tables which enable the researcher to account more accurately for the criterion distribution within the middle range of the population. (Author/RW)

  6. INTERNET-BASED SELF-TAILORED DEPOSIT CONTRACTS TO PROMOTE SMOKING REDUCTION AND ABSTINENCE

    PubMed Central

    Jarvis, Brantley P.; Dallery, Jesse

    2018-01-01

    Deposit contracting may reduce costs and increase efficacy in contingency management interventions. We evaluated two Internet-based deposit contract arrangements for smoking. In Experiment 1, nine participants deposited self-selected amounts that could be earned back for meeting goals. During treatment, participants were reimbursed for breath samples with less than or equal to 6 parts per million carbon monoxide and met the criterion for 47% of samples compared to 1% during baseline. In Experiment 2, 10 participants’ deposits were matched up to $50. No samples met the criterion during baseline but 41.5% met it during treatment. The average deposit was $82 in Experiment 1 and $49 in Experiment 2. Participants rated the intervention favorably and sample submission rates were high. These experiments suggest that Internet-based self-tailored deposits are acceptable, feasible, and can promote brief reduction and abstinence in some smokers. Future research should investigate individual and intervention factors that affect long-term cessation and uptake of deposit contracts. PMID:28211949

  7. The Gideon Criterion: The Effects of Selection Criteria on Soldier Capabilities and Battle Results

    DTIC Science & Technology

    1982-01-01

    [Record text is OCR-garbled; recoverable content:] United States Army Recruiting Command, Research Memorandum 82-1: "The Gideon Criterion: The Effects of Selection Criteria on Soldier Capabilities and Battle Results." The source notes that the copy furnished to DTIC contains a significant number of pages which do not reproduce legibly.

  8. INFO-RNA--a fast approach to inverse RNA folding.

    PubMed

    Busch, Anke; Backofen, Rolf

    2006-08-01

    The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for good initial sequences and a subsequent improved stochastic local search that uses an effective neighbor selection method. During initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a one-step look-ahead that applies an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the minimum-free-energy (mfe) structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.

  9. 76 FR 15961 - Funding Priorities and Selection Criterion; Disability and Rehabilitation Research Projects and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-22

    ... least one, but no more than two, site-specific research projects to test innovative approaches to... Criterion; Disability and Rehabilitation Research Projects and Spinal Cord Injury Model Systems Centers and Multi-Site Collaborative Research Projects AGENCY: Office of Special Education and Rehabilitative...

  10. Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches

    NASA Astrophysics Data System (ADS)

    Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.

    2018-02-01

    This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. Compared to classical design methods, which consist of simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-term tests or experience) to determine the stability of the excavation, the proposed approach is consistent with both short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than those predicted by the stress-based criterion, which is conservative yet constructive for design purposes.

  11. The transformation of the tender evaluation process in public procurement in Poland

    NASA Astrophysics Data System (ADS)

    Plebankiewicz, E.; Kozik, R.

    2017-10-01

    Procedures for the evaluation of tenders have changed since the Public Procurement Law was enacted (it came into force on January 1, 1995). Initially, the contracting authority could apply both criteria related to the qualities of the contractor and those related to the subject matter of the public contract. Two extensive amendments in 2001 and a government project introduced vital regulations and excluded the possibility of applying criteria related to the qualities of the contractor. The Act of 29 January 2004, Public Procurement Law, allowed price to be used as the sole contract award criterion. Changes to the Law in 2014 restricted that possibility to situations in which the subject matter of a contract is commonly available and has established quality standards. The Act of 22 June 2016, amending the Public Procurement Law Act and some other laws, introduced a new list of criteria and limited the importance of the price criterion in certain situations. Instead of price, cost can also be a criterion for tender evaluation; the cost criterion can be determined using life-cycle costing. In this paper, the criteria for construction contract selection are analysed on the basis of contract notices for open tendering published in the Public Procurement Bulletin. In particular, the effectiveness of the changes in the Procurement Law is examined.

  12. Adaptive selection and validation of models of complex systems in the presence of uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell-Maupin, Kathryn; Oden, J. T.

    This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.

  13. Adaptive selection and validation of models of complex systems in the presence of uncertainty

    DOE PAGES

    Farrell-Maupin, Kathryn; Oden, J. T.

    2017-08-01

    This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
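
    For reference, the two criteria named above have the standard forms below, where k is the number of free parameters, L̂ the maximized likelihood, and n the number of observations; the candidate model minimizing the chosen criterion is selected.

    ```latex
    \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}
    ```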

  14. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  15. Program management aid for redundancy selection and operational guidelines

    NASA Technical Reports Server (NTRS)

    Hodge, P. W.; Davis, W. L.; Frumkin, B.

    1972-01-01

    Although this criterion was developed specifically for use on the shuttle program, it has application to many other multi-mission programs (e.g., aircraft or mechanisms). The methodology employed is directly applicable even if the tools (nomographs and equations) are for mission-peculiar cases. The redundancy selection criterion was developed to ensure that both the design and operational cost impacts (life-cycle costs) are considered in selecting the quantity of operational redundancy. These tools were developed as aids for expediting the decision process, not as automatic decision makers. This approach to redundancy selection is unique in that it enables a pseudo systems analysis to be performed on an equipment basis without waiting for all designs to be hardened.

  16. Issues and Procedures in the Development of Criterion Referenced Tests.

    ERIC Educational Resources Information Center

    Klein, Stephen P.; Kosecoff, Jacqueline

    The basic steps and procedures in the development of criterion referenced tests (CRT), as well as the issues and problems associated with these activities are discussed. In the first section of the paper, the discussions focus upon the purpose and defining characteristics of CRTs, item construction and selection, improving item quality, content…

  17. Model selection criterion in survival analysis

    NASA Astrophysics Data System (ADS)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with time until occurrence of an event of interest such as death, recurrence of an illness, the failure of an equipment, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural, and social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion and the Bayesian information criterion are used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
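
    As a hedged illustration of such a comparison (not the study's data or models), the sketch below fits exponential and Weibull models to uncensored event times and compares them by AIC; real survival data would require the censoring to be handled in the likelihood.

    ```python
    import numpy as np
    from scipy import stats

    times = stats.weibull_min.rvs(1.5, scale=10.0, size=200, random_state=0)  # simulated

    def aic(dist, data, k_free, **fit_kwargs):
        params = dist.fit(data, **fit_kwargs)          # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(data, *params))
        return 2 * k_free - 2 * loglik                 # AIC = 2k - 2 ln L

    print("exponential:", aic(stats.expon, times, k_free=1, floc=0))        # scale only
    print("Weibull:    ", aic(stats.weibull_min, times, k_free=2, floc=0))  # shape + scale
    ```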

  18. Crystal clear transparent lipstick formulation based on solidified oils.

    PubMed

    De Clermont-Gallerande, H; Chavardes, V; Zastrow, L

    1999-12-01

    We have developed a lipstick whose stick looks totally transparent. The base, coloured or not, may contain high concentrations of actives or fragrances. The present study examines the process of determining the oils and solidifying agents. The selection criteria include visible spectroscopic measurements to quantify the transparency of the formulated product. We also validated the stick hardness through drop-point and breakage measurements. After several investigations, we selected a mixture of oils and solidifying agents. The resulting oil network was characterized through optical microscopy, transmission electron microscopy, X-ray diffraction and differential scanning calorimetry. We show that the final product is amorphous and that its solidity can be explained by the formation of chemical bonds.

  19. A fast and efficient segmentation scheme for cell microscopic image.

    PubMed

    Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H

    2007-04-27

    Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time needed to segment an image. The main contribution of this work concerns how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimizing the new cell segmentation quality criterion produces efficient cell segmentation.
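
    A minimal sketch in the spirit of the scheme above: an SVM pixel classifier whose posterior class probabilities come from Platt scaling (scikit-learn's probability=True). The color-space design, vector quantization of the training pixels, and the cell-specific quality criterion are all omitted, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # toy training pixels: 3-D color features with expert labels (0=background, 1=cell)
    X_train = np.vstack([rng.normal(0.2, 0.1, (200, 3)), rng.normal(0.7, 0.1, (200, 3))])
    y_train = np.repeat([0, 1], 200)

    clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)  # Platt scaling inside

    image = rng.uniform(0.0, 1.0, (64, 64, 3))        # hypothetical color image
    proba = clf.predict_proba(image.reshape(-1, 3))   # per-pixel posterior probabilities
    mask = (proba[:, 1] > 0.5).reshape(64, 64)        # probabilistic segmentation map
    ```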

  20. Systematic approach to cutoff frequency selection in continuous-wave electron paramagnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi

    2005-08-01

    This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.
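
    The selection rule itself reduces to a one-dimensional minimization. A sketch, assuming hypothetical functions psf_width(fc) and noise_amplitude(fc) that measure the reconstruction at cutoff frequency fc:

    ```python
    def select_cutoff(cutoffs, psf_width, noise_amplitude):
        # evaluation function: effective point-spread-function width times the
        # noise amplitude of the resulting EPR image; smaller is better
        return min(cutoffs, key=lambda fc: psf_width(fc) * noise_amplitude(fc))
    ```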

  1. A viscoplastic study of crack-tip deformation and crack growth in a nickel-based superalloy at elevated temperature

    NASA Astrophysics Data System (ADS)

    Zhao, L. G.; Tong, J.

    Viscoplastic crack-tip deformation behaviour in a nickel-based superalloy at elevated temperature has been studied for both stationary and growing cracks in a compact tension (CT) specimen using the finite element method. The material behaviour was described by a unified viscoplastic constitutive model with non-linear kinematic and isotropic hardening rules, and implemented in the finite element software ABAQUS via a user-defined material subroutine (UMAT). Finite element analyses for stationary cracks showed distinctive strain ratchetting behaviour near the crack tip at selected load ratios, leading to progressive accumulation of tensile strain normal to the crack-growth plane. Results also showed that low frequencies and superimposed hold periods at peak loads significantly enhanced strain accumulation at the crack tip. Finite element simulation of crack growth was carried out under a constant ΔK-controlled loading condition; again, ratchetting was observed ahead of the crack tip, similar to that for stationary cracks. A crack-growth criterion based on strain accumulation is proposed, where a crack is assumed to grow when the accumulated strain ahead of the crack tip reaches a critical value over a characteristic distance. The criterion has been utilized in the prediction of crack-growth rates in a CT specimen at selected loading ranges, frequencies and dwell periods, and the predictions were compared with the experimental results.
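
    A sketch of the growth check implied by the criterion above; whether the accumulated strain is averaged over the characteristic distance or evaluated at it is an assumption here, and all names are illustrative.

    ```python
    import numpy as np

    def crack_grows(strain_ahead, dx, d_char, eps_crit):
        # strain_ahead: accumulated tensile strain at nodes along the ligament,
        # ordered from the crack tip; dx: node spacing (FE output placeholders)
        n = max(1, int(round(d_char / dx)))   # nodes within the characteristic distance
        return float(np.mean(strain_ahead[:n])) >= eps_crit
    ```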

  2. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
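
    A hedged sketch of a scatter-difference subspace in the spirit of MSD/MMSD: the discriminant vectors are taken as the leading eigenvectors of S_b - c*S_w, which avoids inverting the within-class scatter (the source of the small-sample-size singularity in classical LDA). The constant c and the paper's exact subspace construction may differ.

    ```python
    import numpy as np

    def scatter_matrices(X, y):
        mean_all = X.mean(axis=0)
        d = X.shape[1]
        S_b, S_w = np.zeros((d, d)), np.zeros((d, d))
        for cls in np.unique(y):
            Xc = X[y == cls]
            diff = (Xc.mean(axis=0) - mean_all)[:, None]
            S_b += len(Xc) * diff @ diff.T                            # between-class scatter
            S_w += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))  # within-class scatter
        return S_b, S_w

    def mmsd_vectors(X, y, n_vectors, c=1.0):
        S_b, S_w = scatter_matrices(X, y)
        eigvals, eigvecs = np.linalg.eigh(S_b - c * S_w)   # symmetric eigenproblem
        return eigvecs[:, np.argsort(eigvals)[::-1][:n_vectors]]
    ```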

  3. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    NASA Astrophysics Data System (ADS)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive objects of the oil and gas complex are extremely pressing. The problem is especially acute for facilities where a violation of the accuracy of the DE will inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. This work addresses the problem of selecting the optimal variant of the error-detection system according to a validation criterion. Known methods for solving such problems have exponential labor-intensity estimates. Thus, with a view to reducing the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining them [1].

  4. Criterion-Referenced and Norm-Referenced Assessments: Compatibility and Complementarity

    ERIC Educational Resources Information Center

    Lok, Beatrice; McNaught, Carmel; Young, Kenneth

    2016-01-01

    The tension between criterion-referenced and norm-referenced assessment is examined in the context of curriculum planning and assessment in outcomes-based approaches to higher education. This paper argues the importance of a criterion-referenced assessment approach once an outcomes-based approach has been adopted. It further discusses the…

  5. Fatigue Assessment of Nickel-Titanium Peripheral Stents: Comparison of Multi-Axial Fatigue Models

    NASA Astrophysics Data System (ADS)

    Allegretti, Dario; Berti, Francesca; Migliavacca, Francesco; Pennati, Giancarlo; Petrini, Lorenza

    2018-03-01

    Peripheral Nickel-Titanium (NiTi) stents exploit super-elasticity to treat femoropopliteal artery atherosclerosis. The stent is subject to cyclic loads, which may lead to fatigue fracture and treatment failure. The complexity of the loading conditions and device geometry, coupled with the nonlinear material behavior, may induce multi-axial and non-proportional deformation. Finite element analysis can assess the fatigue risk, by comparing the device state of stress with the material fatigue limit. The most suitable fatigue model is not fully understood for NiTi devices, due to its complex thermo-mechanical behavior. This paper assesses the fatigue behavior of NiTi stents through computational models and experimental validation. Four different strain-based models are considered: the von Mises criterion and three critical plane models (Fatemi-Socie, Brown-Miller, and Smith-Watson-Topper models). Two stents, made of the same material with different cell geometries are manufactured, and their fatigue behavior is experimentally characterized. The comparison between experimental and numerical results highlights an overestimation of the failure risk by the von Mises criterion. On the contrary, the selected critical plane models, even if based on different damage mechanisms, give a better fatigue life estimation. Further investigations on crack propagation mechanisms of NiTi stents are required to properly select the most reliable fatigue model.
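
    For orientation, the textbook forms of the three critical-plane parameters named above are given below (Δγ_max: shear strain range on the critical plane; σ_n,max: peak normal stress; σ_y: yield stress; Δε_n: normal strain range; Δε_1: maximum principal strain range; k, S: material constants); the paper's NiTi-specific implementations may differ.

    ```latex
    \mathrm{FS} = \frac{\Delta\gamma_{\max}}{2}\left(1 + k\,\frac{\sigma_{n,\max}}{\sigma_y}\right),
    \qquad
    \mathrm{BM} = \frac{\Delta\gamma_{\max}}{2} + S\,\Delta\varepsilon_n,
    \qquad
    \mathrm{SWT} = \sigma_{n,\max}\,\frac{\Delta\varepsilon_1}{2}
    ```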

  6. Fatigue Assessment of Nickel-Titanium Peripheral Stents: Comparison of Multi-Axial Fatigue Models

    NASA Astrophysics Data System (ADS)

    Allegretti, Dario; Berti, Francesca; Migliavacca, Francesco; Pennati, Giancarlo; Petrini, Lorenza

    2018-02-01

    Peripheral Nickel-Titanium (NiTi) stents exploit super-elasticity to treat femoropopliteal artery atherosclerosis. The stent is subject to cyclic loads, which may lead to fatigue fracture and treatment failure. The complexity of the loading conditions and device geometry, coupled with the nonlinear material behavior, may induce multi-axial and non-proportional deformation. Finite element analysis can assess the fatigue risk, by comparing the device state of stress with the material fatigue limit. The most suitable fatigue model is not fully understood for NiTi devices, due to its complex thermo-mechanical behavior. This paper assesses the fatigue behavior of NiTi stents through computational models and experimental validation. Four different strain-based models are considered: the von Mises criterion and three critical plane models (Fatemi-Socie, Brown-Miller, and Smith-Watson-Topper models). Two stents, made of the same material with different cell geometries are manufactured, and their fatigue behavior is experimentally characterized. The comparison between experimental and numerical results highlights an overestimation of the failure risk by the von Mises criterion. On the contrary, the selected critical plane models, even if based on different damage mechanisms, give a better fatigue life estimation. Further investigations on crack propagation mechanisms of NiTi stents are required to properly select the most reliable fatigue model.

  7. The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model

    ERIC Educational Resources Information Center

    Choi, In-Hee; Paek, Insu; Cho, Sun-Joo

    2017-01-01

    The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…

  8. Relationships between Classroom Schedule Types and Performance on the Algebra I Criterion-Referenced Test

    ERIC Educational Resources Information Center

    Murray, Gregory V.; Moyer-Packenham, Patricia S.

    2014-01-01

    One option for length of individual mathematics class periods is the schedule type selected for Algebra I classes. This study examined the relationship between student achievement, as indicated by Algebra I Criterion-Referenced Test scores, and the schedule type for Algebra I classes. Data obtained from the Utah State Office of Education included…

  9. Factors Affecting the Identification of Research Problems in Educational Administration Studies

    ERIC Educational Resources Information Center

    Yalçin, Mikail; Bektas, Fatih; Öztekin, Özge; Karadag, Engin

    2016-01-01

    The purpose of this study is to reveal the factors that affect the identification of research problems in educational administration studies. The study was designed using the case study method. Criterion sampling was used to determine the work group; the criterion used to select the participants was that of having a study in the field of…

  10. A hedging point strategy--balancing effluent quality, economy and robustness in the control of wastewater treatment plants.

    PubMed

    Ingildsen, P; Olsson, G; Yuan, Z

    2002-01-01

    An operational space map is an efficient tool for comparing a large number of operational strategies to find an optimal choice of setpoints based on a multicriterion. Typically, such a multicriterion includes a weighted sum of the cost of operation and effluent quality. Due to the relatively high cost of aeration, such a definition of optimality results in a relatively high fraction of the effluent total nitrogen being in the form of ammonium. Such a strategy may, however, introduce a risk into operation, because a low degree of ammonium removal leads to a low amount of nitrifiers. This in turn leads to a reduced ability to reject event disturbances, such as large variations in the ammonium load, drops in temperature, the presence of toxic/inhibitory compounds in the influent, etc. Hedging is a risk minimisation tool whose aim is to "reduce one's risk of loss on a bet or speculation by compensating transactions on the other side" (The Concise Oxford Dictionary, 1995). In wastewater treatment plant operation, hedging can be applied by choosing a higher level of ammonium removal to increase the amount of nitrifiers. This is a sensible way to introduce disturbance rejection ability into the multicriterion. In practice, this is done by deciding upon an internal effluent ammonium criterion. In some countries, such as Germany, a separate criterion already applies to the level of ammonium in the effluent. However, in most countries the effluent criterion applies to total nitrogen only. In these cases, an internal effluent ammonium criterion should be selected in order to secure proper disturbance rejection ability.

  11. Training set optimization under population structure in genomic selection.

    PubMed

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, a sampling method that captures the most phenotypic variation in the TRS is desirable. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure, and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, Alison; Glebbeek, Evert; Chatterjee, Sourav

    We created artificial color-magnitude diagrams of Monte Carlo dynamical models of globular clusters and then used observational methods to determine the number of blue stragglers in those clusters. We compared these blue stragglers to various cluster properties, mimicking work that has been done for blue stragglers in Milky Way globular clusters to determine the dominant formation mechanism(s) of this unusual stellar population. We find that a mass-based prescription for selecting blue stragglers will select approximately twice as many blue stragglers as a selection criterion that was developed for observations of real clusters. However, the two numbers of blue stragglers are well-correlated, so either selection criterion can be used to characterize the blue straggler population of a cluster. We confirm previous results that the simplified prescription for the evolution of a collision or merger product in the BSE code overestimates their lifetimes. We show that our model blue stragglers follow similar trends with cluster properties (core mass, binary fraction, total mass, collision rate) as the true Milky Way blue stragglers as long as we restrict ourselves to model clusters with an initial binary fraction higher than 5%. We also show that, in contrast to earlier work, the number of blue stragglers in the cluster core does have a weak dependence on the collisional parameter Γ in both our models and in Milky Way globular clusters.

  13. Development and application of course-embedded assessment system for program outcome evaluation in the Korean nursing education: A pilot study.

    PubMed

    Park, Jee Won; Seo, Eun Ji; You, Mi-Ae; Song, Ju-Eun

    2016-03-01

    Program outcome evaluation is important because it is an indicator of good educational quality. Course-embedded assessment is one of the program outcome evaluation methods; however, it is rarely used in Korean nursing education. The study purpose was to develop and preliminarily apply a course-embedded assessment system to evaluate one program outcome and to share our experiences. This was a methodological study to develop and apply a course-embedded assessment system, based on a theoretical framework, in one nursing program in South Korea. Scores for 77 students generated from the three practicum courses were used. The course-embedded assessment system was developed following the six steps suggested by Han's model, as follows: 1) one program outcome in the undergraduate program, "nursing process application ability", was selected; 2) the three clinical practicum courses related to the selected program outcome were identified; 3) evaluation tools, including a rubric and items, were selected for outcome measurement; 4) the performance criterion, the educational goal level for the program, was established; 5) the program outcome was evaluated using the rubric and evaluation items in the three practicum courses; and 6) the obtained scores were analyzed to identify the achievement rate, which was compared with the performance criterion. Achievement rates for the selected program outcome in the adult, maternity, and pediatric nursing practicums were 98.7%, 100%, and 66.2% for the case report, 100% for all three for the clinical practice, and 100%, 100%, and 87%, respectively, for the conference. These are considered satisfactory levels when compared with the performance criterion of "at least 60% or more". Course-embedded assessment can be used as an effective and economical method to evaluate program outcomes without running an additional integrative course. Further studies to develop course-embedded assessment systems for other program outcomes in nursing education are needed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Simulation of selected genealogies.

    PubMed

    Slade, P F

    2000-02-01

    Algorithms for generating genealogies with selection, conditional on the sample configuration of n genes in one-locus, two-allele haploid and diploid models, are presented. Enhanced integro-recursions using the ancestral selection graph, introduced by S. M. Krone and C. Neuhauser (1997, Theor. Popul. Biol. 51, 210-237), which is the non-neutral analogue of the coalescent, enable accessible simulation of the embedded genealogy. A Monte Carlo simulation scheme based on that of R. C. Griffiths and S. Tavaré (1996, Math. Comput. Modelling 23, 141-158) is adopted to consider the estimation of ancestral times under selection. Simulations show that selection alters the expected depth of the conditional ancestral trees, depending on the mutation-selection balance. As a consequence, branch lengths are shown to be an ineffective criterion for detecting the presence of selection. Several examples are given which quantify the effects of selection on the conditional expected time to the most recent common ancestor. Copyright 2000 Academic Press.

  15. Selected mode of dendritic growth with n-fold symmetry in the presence of a forced flow

    NASA Astrophysics Data System (ADS)

    Alexandrov, D. V.; Galenko, P. K.

    2017-07-01

    The effect of n-fold crystal symmetry is investigated for a two-dimensional stable dendritic growth in the presence of a forced convective flow. We consider dendritic growth in a one-component undercooled liquid. The theory is developed for the parabolic solid-liquid surface of dendrite growing at arbitrary growth Péclet numbers keeping in mind small anisotropies of surface energy and growth kinetics. The selection criterion determining the stable growth velocity of the dendritic tip and its stable tip diameter is found on the basis of solvability analysis. The obtained criterion includes previously developed theories of thermally and kinetically controlled dendritic growth with convection for the case of four-fold crystal symmetry. The obtained nonlinear system of equations (representing the selection criterion and undercooling balance) for the determination of dendrite tip velocity and dendrite tip diameter is analytically solved in a parametric form. These exact solutions clearly demonstrate a transition between thermally and kinetically controlled growth regimes. In addition, we show that the dendrites with larger crystal symmetry grow faster than those with smaller symmetry.

  16. Video-task acquisition in rhesus monkeys (Macaca mulatta) and chimpanzees (Pan troglodytes): a comparative analysis

    NASA Technical Reports Server (NTRS)

    Hopkins, W. D.; Washburn, D. A.; Hyatt, C. W.; Rumbaugh, D. M. (Principal Investigator)

    1996-01-01

    This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.

  17. Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints

    NASA Astrophysics Data System (ADS)

    Yan, Wei

    2012-01-01

    An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices that follow jump-diffusion processes, consistent with the actual prices of stocks and with the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example of M-V portfolio selection under discontinuous prices is presented.

  18. Criterion I: Soil and water conservation on rangelands [Chapter 2]

    Treesearch

    Michael G. (Sherm) Karl; Paul T. Tueller; Gerald E. Schuman; Mark R. Vinson; James L. Fogg; Ronald W. Shafer; David A. Pyke; D. Terrance Booth; Steven J. Borchard; William G. Ypsilantis; Richard H. Barrett

    2010-01-01

    The Sustainable Rangelands Roundtable (SRR) has explicitly included conservation and maintenance of soil and water resources as a criterion of rangeland sustainability. Within the soil/water criterion, 10 indicators - five soil-based and five water-based - were developed through the expert opinions of rangeland scientists, rangeland management agency personnel, non-...

  19. Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.

    PubMed

    Pehkonen, Petri; Wong, Garry; Törönen, Petri

    2010-01-01

    Segmentation aims to separate homogeneous areas within sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation to locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requiring user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on simplifications that can limit their usage. In this paper, we propose a Bayesian model selection to choose the most appropriate result from heuristic segmentation. Our Bayesian model uses a simple prior for segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method to yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
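
    To make the role of such a criterion concrete, the sketch below scores candidate segmentations of a Gaussian sequence with the Bayesian information criterion, BIC = -2 log L + k log n (the piecewise-Gaussian model is an illustrative assumption, not the authors' Dirichlet-multinomial model):

      import numpy as np

      def bic_gaussian_segments(x, boundaries):
          """BIC of a piecewise-constant Gaussian model.
          boundaries: sorted interior cut points, e.g. [40, 120]."""
          n = len(x)
          cuts = [0] + list(boundaries) + [n]
          loglik = 0.0
          for a, b in zip(cuts[:-1], cuts[1:]):
              seg = x[a:b]
              var = seg.var() + 1e-12                   # ML variance estimate
              loglik += -0.5 * len(seg) * (np.log(2 * np.pi * var) + 1.0)
          k = 2 * (len(cuts) - 1) + len(boundaries)     # mean+var per segment, plus cuts
          return -2.0 * loglik + k * np.log(n)

      x = np.concatenate([np.random.normal(0, 1, 100), np.random.normal(3, 1, 100)])
      print(bic_gaussian_segments(x, []), bic_gaussian_segments(x, [100]))
      # The two-segment model should score (much) lower, i.e. better.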

  20. Administrative Process and Criteria Ranking for Drug Entering Health Insurance List in Iran-TOPSIS-Based Consensus Model.

    PubMed

    Viyanchi, Amir; Rajabzadeh Ghatari, Ali; Rasekh, Hamid Reza; SafiKhani, HamidReza

    2016-01-01

    The purposes of our study were to identify a drug entry process and to collect and prioritize criteria for selecting drugs for the list of basic health insurance commitments, in order to prepare an "evidence based reimbursement eligibility plan" in Iran. A total of 128 notable criteria were found when studying the health insurance systems of developed countries. Four parts (comprising these criteria) formed the first questionnaire: evaluation of evidence quality, clinical evaluation, economic evaluation, and managerial appraisal. Eighty-five experts (purposive sampling) were asked to mark the importance of each criterion from 1 to 100, with 1 representing the least and 100 the most important criterion, and 45 of them replied completely. Then, in the next questionnaire, we evaluated the 48 remaining criteria with the same 45 participants under four sub-criteria (cost calculation simplicity, interpretability, precision, and updating capability of a criterion). After collecting the replies, the remaining criteria were ranked by the TOPSIS method. SPSS 17 and Excel 2007 were used. The five most important criteria found for drug approval based on TOPSIS are as follows: 1 - domestic production (0.556), 2 - duration of use (0.399), 3 - independence of the assessment group (0.363), 4 - budget impact (0.362), 5 - decisions of other countries about the same drug (0.358). The numbers in parentheses are the relative closeness of the alternatives to the ideal solution. This study provides a scientific model for judging fairly the acceptance of new medicines.
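
    For reference, the core TOPSIS computation used for the ranking - vector-normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution - can be sketched as follows (illustrative data; all criteria treated as benefit-type):

      import numpy as np

      def topsis(matrix, weights):
          """Rank alternatives (rows) on criteria (columns), all benefit-type.
          Returns relative closeness to the ideal solution, in [0, 1]."""
          m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
          v = m * weights                               # weighted normalized matrix
          ideal, anti = v.max(axis=0), v.min(axis=0)    # ideal / anti-ideal points
          d_plus = np.linalg.norm(v - ideal, axis=1)
          d_minus = np.linalg.norm(v - anti, axis=1)
          return d_minus / (d_plus + d_minus)

      scores = topsis(np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.]]),
                      np.array([0.5, 0.3, 0.2]))
      print(scores)   # higher = closer to the ideal solution

    The relative-closeness values printed here play the same role as the numbers in parentheses reported above.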

  1. New bandwidth selection criterion for Kernel PCA: approach to dimensionality reduction and classification problems.

    PubMed

    Thomas, Minta; De Brabanter, Kris; De Moor, Bart

    2014-05-10

    DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main inputs to these class predictors are high-dimensional data with many variables and few observations. Dimensionality reduction of these feature sets significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics, and several prediction tools are available based on these techniques. Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least-squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and a Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model on 9 case studies. Then, we compare its performance (in terms of test-set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM), and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy against an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier that predicts the classes of future samples. Moreover, the proposed strategy enjoys the distinctive advantage of relatively lower time complexity.
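
    The proposed KPCA criterion is stated to be related to least-squares cross-validation (LSCV) for kernel density estimation; for a one-dimensional Gaussian kernel, LSCV has the closed form sketched below (a plain KDE illustration under that assumption, not the authors' KPCA tuning code):

      import numpy as np

      def lscv(x, h):
          """LSCV(h) = int f_hat^2 - (2/n) sum_i f_hat_{-i}(x_i); smaller is better."""
          n = len(x)
          d = (x[:, None] - x[None, :]) / h
          # int f_hat^2: two Gaussian kernels convolve to a N(0, 2) kernel
          term1 = np.exp(-d ** 2 / 4).sum() / (n ** 2 * h * np.sqrt(4 * np.pi))
          # leave-one-out density at each sample (drop the diagonal terms)
          k = np.exp(-d ** 2 / 2) / np.sqrt(2 * np.pi)
          np.fill_diagonal(k, 0.0)
          term2 = 2 * k.sum() / (n * (n - 1) * h)
          return term1 - term2

      x = np.random.normal(size=200)
      hs = np.linspace(0.05, 1.0, 40)
      print(hs[np.argmin([lscv(x, h) for h in hs])])   # data-driven bandwidth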

  2. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, based on the concept of selective update, in order to reduce estimation errors and computation. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking, via the mean square error (MSE), whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using a state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors; once the adaptive filter reaches the steady state, the update is skipped. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity, and low update complexity for colored input signals.
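
    For context, the classical affine projection update that such selective variants build on reuses the last K input vectors at once; a minimal sketch (system-identification toy, without the paper's input-vector-selection and state-decision logic) is:

      import numpy as np

      def apa_step(w, X, d, mu=0.5, delta=1e-4):
          """One affine projection update.
          w: (L,) weights; X: (L, K) last K input vectors as columns;
          d: (K,) desired outputs; mu: step size; delta: regularization."""
          e = d - X.T @ w                               # a-priori errors
          w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
          return w, e

      rng = np.random.default_rng(0)
      h_true = np.array([1.0, -0.5, 0.25, 0.1])         # unknown 4-tap system
      x = np.convolve(rng.standard_normal(2000), [1, 0.8], mode="same")  # colored input
      w, L, K = np.zeros(4), 4, 3
      for n in range(L + K, len(x)):
          X = np.column_stack([x[n - k - L + 1:n - k + 1][::-1] for k in range(K)])
          d = X.T @ h_true                              # noiseless desired signal
          w, _ = apa_step(w, X, d)
      print(w)   # converges to h_true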

  3. Fundamental Vocabulary Selection Based on Word Familiarity

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi; Kasahara, Kaname; Kanasugi, Tomoko; Amano, Shigeaki

    This paper proposes a new method for selecting fundamental vocabulary. We are presently constructing the Fundamental Vocabulary Knowledge-base of Japanese that contains integrated information on syntax, semantics and pragmatics, for the purposes of advanced natural language processing. This database mainly consists of a lexicon and a treebank: Lexeed (a Japanese Semantic Lexicon) and the Hinoki Treebank. Fundamental vocabulary selection is the first step in the construction of Lexeed. The vocabulary should include sufficient words to describe general concepts for self-expandability, and should not be prohibitively large to construct and maintain. There are two conventional methods for selecting fundamental vocabulary. The first is intuition-based selection by experts. This is the traditional method for making dictionaries. A weak point of this method is that the selection strongly depends on personal intuition. The second is corpus-based selection. This method is superior in objectivity to intuition-based selection; however, it is difficult to compile a sufficiently balanced corpus. We propose a psychologically motivated selection method that adopts word familiarity as the selection criterion. Word familiarity is a rating that represents the familiarity of a word as a real number ranging from 1 (least familiar) to 7 (most familiar). We determined the word familiarity ratings statistically, based on psychological experiments with 32 subjects. We selected about 30,000 words as the fundamental vocabulary, based on a minimum word familiarity threshold of 5. We also evaluated the vocabulary by comparing its word coverage with conventional intuition-based and corpus-based selection over dictionary definition sentences and novels, and demonstrated the superior coverage of our lexicon. Based on this, we conclude that the proposed method is superior to conventional methods for fundamental vocabulary selection.

  4. Water-sediment controversy in setting environmental standards for selenium

    USGS Publications Warehouse

    Hamilton, Steven J.; Lemly, A. Dennis

    1999-01-01

    A substantial amount of laboratory and field research on selenium effects on biota has been accomplished since the national water quality criterion for selenium was published in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 μg/L. This commentary presents information to support a national water quality criterion for selenium of 2 μg/L, based on a wide array of support from federal, state, university, and international sources. Recently, two articles have argued for a sediment-based criterion and presented a model for deriving site-specific criteria. In one example, they calculate a criterion of 31 μg/L for a stream with a low sediment selenium toxicity threshold and low site-specific sediment total organic carbon content, which is substantially higher than the national criterion of 5 μg/L. Their basic premise for proposing a sediment-based method is critically reviewed and the problems in their approach are discussed.

  5. Test Design Optimization in CAT Early Stage with the Nominal Response Model

    ERIC Educational Resources Information Center

    Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E.

    2007-01-01

    The early stage of computerized adaptive testing (CAT) refers to the phase of the trait estimation during the administration of only a few items. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A…
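
    Schematically, the D-optimality criterion picks, at each step, the candidate item whose Fisher information matrix most increases the determinant of the accumulated information. A generic sketch (toy 2x2 information matrices, not the nominal-response-model computation of the study):

      import numpy as np

      def pick_d_optimal(M, candidates):
          """Index of the item maximizing det(M + I_j)."""
          return int(np.argmax([np.linalg.det(M + I) for I in candidates]))

      rng = np.random.default_rng(1)
      # Toy per-item information matrices (symmetric PSD), purely illustrative.
      items = [(lambda A: A @ A.T)(rng.normal(size=(2, 2))) for _ in range(20)]
      M = 0.1 * np.eye(2)                   # weak prior information
      for _ in range(5):
          j = pick_d_optimal(M, items)
          M += items.pop(j)                 # administer the selected item
      print(np.linalg.det(M))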

  6. The Development of a Model for Construction of Criterion Referenced System Achievement Tests for the Strategic Weapon System Training Program.

    ERIC Educational Resources Information Center

    Cantor, Jeffrey A.; Hobson, Edward N.

    The development of a test design methodology used to construct a criterion-referenced System Achievement Test (CR-SAT) for selected Naval enlisted classification (NEC) in the Strategic Weapon System (SWS) of the United States Navy is described. Subject matter experts, training data analysts and educational specialists developed a comprehensive…

  7. Interaction Effects Between Selected Cognitive Abilities and Instructional Treatment in Algebra: An ATI Study.

    ERIC Educational Resources Information Center

    Webb, Leland F.

    The purpose of this study was to confirm or deny Carry's findings in an earlier Aptitude Treatment Interaction (ATI) study by implementing his suggestions to: (1) revise instructional treatments, (2) improve the criterion measures, (3) use four predictor tests, (4) add time to criterion measure, and (5) use a theoretical model to identify relevant…

  8. Effect of Items Direction (Positive or Negative) on the Factorial Construction and Criterion Related Validity in Likert Scale

    ERIC Educational Resources Information Center

    Naji Qasem, Mamun Ali; Ahmad Gul, Showkeen Bilal

    2014-01-01

    The study was conducted to examine the effect of item direction (positive or negative) on the factorial construction and criterion-related validity of a Likert scale. The descriptive survey research method was used, and the sample consisted of 510 undergraduate students selected using a random sampling technique. A scale developed by…

  9. Using Norm-Referenced Data to Set Standards for a Minimum Competency Program in the State of South Carolina.

    ERIC Educational Resources Information Center

    Garcia-Quintana, Roan A.; Mappus, M. Lynne

    1980-01-01

    Norm-referenced data were utilized for determining the mastery cutoff score on a criterion-referenced test. Once a cutoff score on the norm-referenced measure is selected, the cutoff score on the criterion-referenced measure becomes the score which maximizes the proportion of consistent classifications and the proportion of improvement beyond chance. (CP)

  10. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned to each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike information criterion, the Bayesian information criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for validation of the random regression models. Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h(2) = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any age can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
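
    The order-selection step can be illustrated with ordinary Legendre fits scored by AIC; a toy sketch (simple least-squares growth curves, not the full random regression animal model of the study):

      import numpy as np
      from numpy.polynomial import Legendre

      age = np.linspace(0, 84, 30)
      bw = 40 + 25 * age + 0.2 * age ** 2 + np.random.normal(0, 30, age.size)  # toy weights

      def aic(y, yhat, k):
          """Gaussian AIC up to an additive constant."""
          n = len(y)
          rss = np.sum((y - yhat) ** 2)
          return n * np.log(rss / n) + 2 * k

      for deg in (1, 2, 3):
          fit = Legendre.fit(age, bw, deg)
          print(deg, round(aic(bw, fit(age), deg + 1), 1))
      # The second-order fit should typically score best, mirroring the result above.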

  11. Retrieving relevant factors with exploratory SEM and principal-covariate regression: A comparison.

    PubMed

    Vervloet, Marlies; Van den Noortgate, Wim; Ceulemans, Eva

    2018-02-12

    Behavioral researchers often linearly regress a criterion on multiple predictors, aiming to gain insight into the relations between the criterion and the predictors. Obtaining this insight from the ordinary least squares (OLS) regression solution may be troublesome, because OLS regression weights show only the effect of a predictor on top of the effects of the other predictors. Moreover, when the number of predictors grows larger, it becomes likely that the predictors will be highly collinear, which makes the regression weight estimates unstable (i.e., the "bouncing beta" problem). Among other procedures, dimension-reduction-based methods have been proposed for dealing with these problems. These methods yield insight into the data by reducing the predictors to a smaller number of summarizing variables and regressing the criterion on these summarizing variables. Two promising methods are principal-covariate regression (PCovR) and exploratory structural equation modeling (ESEM). Both simultaneously optimize reduction and prediction, but they are based on different frameworks; the resulting solutions had not yet been compared, so it was unclear what the strengths and weaknesses of each method are. In this article, we focus on the extent to which PCovR and ESEM are able to extract the factors that truly underlie the predictor scores and can predict a single criterion. The results of two simulation studies showed that for a typical behavioral dataset, ESEM (using the BIC for model selection) is in this regard successful more often than PCovR. Yet, in 93% of the datasets PCovR performed equally well, and in the case of 48 predictors, 100 observations, and large differences in the strengths of the factors, PCovR even outperformed ESEM.

  12. Learning to rank atlases for multiple-atlas segmentation.

    PubMed

    Sanroma, Gerard; Wu, Guorong; Gao, Yaozong; Shen, Dinggang

    2014-10-01

    Recently, multiple-atlas segmentation (MAS) has achieved great success in the medical imaging area. The key assumption is that multiple atlases have a greater chance of correctly labeling a target image than a single atlas. However, the problem of atlas selection remains largely unexplored. Traditionally, image similarity is used to select a set of atlases. Unfortunately, this heuristic criterion is not necessarily related to the final segmentation performance. To solve this seemingly simple but critical problem, we propose a learning-based atlas selection method to pick the best atlases that would lead to a more accurate segmentation. Our main idea is to learn the relationship between the pairwise appearance of observed instances (i.e., a pair of atlas and target images) and their final labeling performance (e.g., using the Dice ratio). In this way, we select the best atlases based on their expected labeling accuracy. Our atlas selection method is general enough to be integrated with any existing MAS method. We show the advantages of our atlas selection method in an extensive experimental evaluation in the ADNI, SATA, IXI, and LONI LPBA40 datasets. As shown in the experiments, our method can boost the performance of three widely used MAS methods, outperforming other learning-based and image-similarity-based atlas selection methods.

  13. Empirical extensions of the lasso penalty to reduce the false discovery rate in high-dimensional Cox regression models.

    PubMed

    Ternès, Nils; Rotolo, Federico; Michiels, Stefan

    2016-07-10

    Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using the maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd.

  14. Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan

    To address the trustworthiness problem of industry software, an approach that constructs an industry software trustworthiness criterion oriented to the business is proposed. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", the idea of business trustworthiness is embodied in the different aspects of the trustworthy triangle model for a specific industry software system, a power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. Fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.

  15. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
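
    The optimization step can be stated generically: sequential backward selection starts from the full candidate set and repeatedly drops the element whose removal yields the best value of the error criterion. A generic sketch with a placeholder criterion (the perceptual error model of the paper is not reproduced here):

      def sequential_backward_selection(candidates, criterion, target_size):
          """Greedy SBS: drop one element at a time, keeping the subset
          that scores best under `criterion` (lower = better)."""
          current = list(candidates)
          while len(current) > target_size:
              scores = [criterion(current[:i] + current[i + 1:])
                        for i in range(len(current))]
              current.pop(min(range(len(scores)), key=scores.__getitem__))
          return current

      # Placeholder criterion: prefer subsets whose values are spread out.
      crit = lambda s: -sum(abs(a - b) for a in s for b in s)
      print(sequential_backward_selection(range(10), crit, 4))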

  16. Guidelines and Parameter Selection for the Simulation of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Song, Kyongchan; Davila, Carlos G.; Rose, Cheryl A.

    2008-01-01

    Turon's methodology for determining optimal analysis parameters for the simulation of progressive delamination is reviewed. Recommended procedures for determining analysis parameters for efficient delamination growth predictions using the Abaqus/Standard cohesive element and relatively coarse meshes are provided for single and mixed-mode loading. The Abaqus cohesive element, COH3D8, and a user-defined cohesive element are used to develop finite element models of the double cantilever beam specimen, the end-notched flexure specimen, and the mixed-mode bending specimen to simulate progressive delamination growth in Mode I, Mode II, and mixed-mode fracture, respectively. The predicted responses are compared with their analytical solutions. The results show that for single-mode fracture, the predicted responses obtained with the Abaqus cohesive element correlate well with the analytical solutions. For mixed-mode fracture, it was found that the response predicted using COH3D8 elements depends on the damage evolution criterion that is used. The energy-based criterion overpredicts the peak loads and load-deflection response. The results predicted using a tabulated form of the BK criterion correlate well with the analytical solution and with the results predicted with the user-written element.
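
    For reference, the Benzeggagh-Kenane (BK) criterion referred to above expresses the mixed-mode critical energy release rate in its standard form as

      G_c = G_{Ic} + \left(G_{IIc} - G_{Ic}\right)\left(\frac{G_{\mathrm{shear}}}{G_T}\right)^{\eta}, \qquad G_T = G_I + G_{\mathrm{shear}},

    where G_{Ic} and G_{IIc} are the Mode I and Mode II fracture toughnesses and the exponent \eta is a material parameter fitted to mixed-mode test data; delamination is predicted to advance once the total energy release rate G_T reaches G_c. The tabulated variant mentioned above replaces this analytic interpolation with values taken directly from test data.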

  17. A pattern of contractor selection for oil and gas industries in a safety approach using ANP-DEMATEL in a Grey environment.

    PubMed

    Gharedaghi, Gholamreza; Omidvari, Manouchehr

    2018-01-11

    Contractor selection is one of the major concerns of industry managers, such as those in the oil industry. The objective of this study was to determine a contractor selection pattern for the oil and gas industries under a safety approach. Assessing contractors against specific criteria and ultimately selecting an eligible contractor preserves organizational resources. Due to the safety risks involved in the oil industry, one of the major criteria of contractor selection considered by managers today is safety. The results indicated that the most important safety criteria for contractor selection were safety records and safety investments. This reflects the industry's risks and the impact of safety training and investment on the performance of other sectors and the overall organization. The output of this model could be useful in the safety risk assessment process in the oil industry and other industries.

  18. Quantitative comparison of the absorption spectra of the gas mixtures in analogy to the criterion of Pearson

    NASA Astrophysics Data System (ADS)

    Kistenev, Yu. V.; Kuzmin, D. A.; Sandykova, E. A.; Shapovalov, A. V.

    2015-11-01

    An approach to reducing the space of absorption spectra, based on an original criterion for profile analysis of the spectra, is proposed. The criterion derives from Pearson's well-known chi-square statistic. The introduced criterion makes it possible to quantify the differences between spectral curves.
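
    One common chi-square-type distance between two spectral profiles, a plausible reading of such a criterion (the record does not give the authors' exact normalization), is:

      import numpy as np

      def chi2_distance(a, b, eps=1e-12):
          """Symmetric chi-square distance between two non-negative spectra,
          each first normalized to unit area."""
          a = np.asarray(a, float)
          b = np.asarray(b, float)
          a, b = a / a.sum(), b / b.sum()
          return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

      grid = np.linspace(-3, 3, 200)
      s1 = np.exp(-grid ** 2)              # toy absorption profile
      s2 = np.exp(-(grid - 0.3) ** 2)      # shifted profile
      print(chi2_distance(s1, s1), chi2_distance(s1, s2))   # 0.0 vs. > 0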

  19. Convergent, discriminant, and criterion validity of DSM-5 traits.

    PubMed

    Yalch, Matthew M; Hopwood, Christopher J

    2016-10-01

    Section III of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association, 2013) contains a system for diagnosing personality disorder based in part on assessing 25 maladaptive traits. Initial research suggests that this aspect of the system improves the validity and clinical utility of the Section II model. The Computer Adaptive Test of Personality Disorder (CAT-PD; Simms et al., 2011) contains many of the same traits as the DSM-5, as well as several additional traits seemingly not covered in the DSM-5. In this study we evaluate the convergent and discriminant validity between the DSM-5 traits, as assessed by the Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012), and the CAT-PD in an undergraduate sample, and test whether traits included in the CAT-PD but not the DSM-5 provide incremental validity in association with clinically relevant criterion variables. Results supported the convergent and discriminant validity of the PID-5 and CAT-PD scales in their assessment of 23 out of 25 DSM-5 traits. DSM-5 traits were consistently associated with 11 criterion variables, despite our having intentionally selected clinically relevant criterion constructs not directly assessed by DSM-5 traits. However, the additional CAT-PD traits provided incremental information above and beyond the DSM-5 traits for all criterion variables examined. These findings support the validity of pathological trait models in general and the DSM-5 and CAT-PD models in particular, while also suggesting that the CAT-PD may include additional traits for consideration in future iterations of the DSM-5 system. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Criterion-Referenced Measurement in the Army: Development of a Research- Based, Practical, Test Construction Manual

    DTIC Science & Technology

    1978-09-01

    ARI Technical Report TR-78-A31. Criterion-Referenced Measurement in the Army: Development of a Research-Based, Practical, Test Construction ... conducted to develop a Criterion-Referenced Tests (CRTs) Construction Manual. Major accomplishments were the preparation of a written review of the ... survey of the literature on Criterion-Referenced Testing conducted in order to provide an information base for development of the CRT Construction

  1. STOL propulsion systems

    NASA Technical Reports Server (NTRS)

    Denington, R. J.; Koenig, R. W.; Vanco, M. R.; Sagerser, D. A.

    1972-01-01

    The selection and the characteristics of quiet, clean propulsion systems for STOL aircraft are discussed. Engines are evaluated for augmentor wing and externally blown flap STOL aircraft with the engines located both under and over the wings. Some supporting test data are presented. Optimum engines are selected based on achieving the performance, economic, acoustic, and pollution goals presently being considered for future STOL aircraft. The data and results presented were obtained from a number of contracted studies and some supporting NASA in-house programs, most of which began in early 1972. The contracts include: (1) two aircraft and mission studies, (2) two propulsion system studies, (3) the experimental and analytic work on the augmentor wing, and (4) the experimental programs on Q-Fan. Engines are selected and discussed based on aircraft economics, using the direct operating cost as the primary criterion. This cost includes the cost of the crew, fuel, aircraft, and engine maintenance and depreciation.

  2. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models.

    PubMed

    Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling

    2013-07-04

    Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.

  3. Two criteria for the selection of assembly plans - Maximizing the flexibility of sequencing the assembly tasks and minimizing the assembly time through parallel execution of assembly tasks

    NASA Technical Reports Server (NTRS)

    Homem De Mello, Luiz S.; Sanderson, Arthur C.

    1991-01-01

    The authors introduce two criteria for the evaluation and selection of assembly plans. The first criterion is to maximize the number of different sequences in which the assembly tasks can be executed. The second criterion is to minimize the total assembly time through simultaneous execution of assembly tasks. An algorithm that performs a heuristic search for the best assembly plan over the AND/OR graph representation of assembly plans is discussed. Admissible heuristics for each of the two criteria introduced are presented. Some implementation issues that affect the computational efficiency are addressed.

  4. Wavenumber selection in Benard convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catton, I.

    1988-11-01

    The results of three related studies dealing with wavenumber selection in Rayleigh-Benard convection are reported. The first, an extension of the power integral method, is used to argue for the existence of multiple wavenumbers at all supercritical conditions. Most existing closure schemes are shown to be inadequate. A thermodynamic stability criterion is shown to give reasonable results but requires empirical measurement of one parameter for closure. The third study uses an asymptotic approach based in part on geometric considerations and requires no empiricism to obtain good predictions of the wavenumber. These predictions, however, can only be used for certain planforms of convection.

  5. High-performance composite chocolate

    NASA Astrophysics Data System (ADS)

    Dean, Julian; Thomson, Katrin; Hollands, Lisa; Bates, Joanna; Carter, Melvyn; Freeman, Colin; Kapranos, Plato; Goodall, Russell

    2013-07-01

    The performance of any engineering component depends on and is limited by the properties of the material from which it is fabricated. It is crucial for engineering students to understand these material properties, interpret them and select the right material for the right application. In this paper we present a new method to engage students with the material selection process. In a competition-based practical, first-year undergraduate students design, cost and cast composite chocolate samples to maximize a particular performance criterion. The same activity could be adapted for any level of education to introduce the subject of materials properties and their effects on the material chosen for specific applications.

  6. Determination of Air Enthalpy Based on Meteorological Data as an Indicator for Heat Stress Assessment in Occupational Outdoor Environments, a Field Study in IRAN.

    PubMed

    Heidari, Hamidreza; Golbabaei, Farideh; Shamsipour, Aliakbar; Rahimi Forushani, Abbas; Gaeini, Abbasali

    2016-01-01

    Heat stress evaluation and timely notification, especially using meteorological data, is an important issue that has attracted attention in recent years. This study therefore aimed at answering the following research questions: 1) can enthalpy, a common environmental parameter reported by meteorological agencies, be applied accurately for evaluating the thermal conditions of outdoor settings, and 2) if so, what is the best criterion value for separating areas under heat stress from stress-free areas? Nine climatic regions were selected throughout Iran, covering a wide variety of climatic conditions like those that exist around the world. Three types of parameters, including measured (ta, RH, Pa and WBGT), estimated (metabolic rate and clothing thermal insulation), and calculated parameters (enthalpy and effective WBGT), were recorded for 1452 different situations. Enthalpy, as a new indicator in this research, was compared to WBGT in the selected regions. Altogether, a good consistency was obtained between enthalpy and WBGT in the selected regions (Kappa value: 0.815). Based on the ROC curve obtained using MedCal software, a criterion value of 74.24 was determined for the new index, with higher values indicating heat stress in outdoor environments. Because of its simplicity of measurement, its applicability for weather agencies, the consistency observed between enthalpy and a valid and accurate index (WBGT), and sensor requirements that take only a few seconds to reach equilibrium, the enthalpy indicator can be introduced and applied as a good substitute for WBGT in outdoor settings.

  7. Improving data analysis in herpetology: Using Akaike's information criterion (AIC) to assess the strength of biological hypotheses

    USGS Publications Warehouse

    Mazerolle, M.J.

    2006-01-01

    In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is remarkably superior for model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis, relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
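
    The multimodel-inference step described here is easy to reproduce: compute the AIC difference of each model from the best one and convert the differences to Akaike weights. A minimal sketch with illustrative AIC scores:

      import numpy as np

      def akaike_weights(aic_values):
          """w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)."""
          aic = np.asarray(aic_values, float)
          delta = aic - aic.min()
          w = np.exp(-delta / 2)
          return w / w.sum()

      # Illustrative AIC scores for four candidate habitat models.
      w = akaike_weights([210.3, 212.1, 215.8, 224.0])
      print(w.round(3))
      # A model-averaged estimate of a shared parameter is then sum_i w_i * theta_i.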

  8. Optimization of multi-environment trials for genomic selection based on crop models.

    PubMed

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion to optimize multi-environment trials so as to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling by means of crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient, in terms of the quality of the parameter estimates, than random METs composed of twice as many environments. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  9. Towards a new tool for the evaluation of the quality of ultrasound compressed images.

    PubMed

    Delgorge, Cécile; Rosenberger, Christophe; Poisson, Gérard; Vieyres, Pierre

    2006-11-01

    This paper presents a new tool for the evaluation of ultrasound image compression. The goal is to measure the image quality as easily as with a statistical criterion, and with the same reliability as the one provided by the medical assessment. An initial experiment is proposed to medical experts and represents our reference value for the comparison of evaluation criteria. Twenty-one statistical criteria are selected from the literature. A cumulative absolute similarity measure is defined as a distance between the criterion to evaluate and the reference value. A first fusion method based on a linear combination of criteria is proposed to improve the results obtained by each of them separately. The second proposed approach combines different statistical criteria and uses the medical assessment in a training phase with a support vector machine. Some experimental results are given and show the benefit of fusion.

  10. Survey of Foods to Improve Logistic Support and Extend Mission Endurance of Submarines

    DTIC Science & Technology

    1981-12-01

    lower priority. For example, in the computer run, Chicken, Frozen, Broiler-Fryer, Whole (National Stock Number (NSN) 8905-00-126-3416) is specified... Increase ration density (1), Decrease logistical costs (2), Reduce number of line items (3), Conserve onboard refrigeration space (4), Reduce food service labor... selection priority is based primarily on the ability of the substitute item to achieve increased ration density, and each subsequent criterion has a

  11. A second perspective on the Amann-Schmiedl-Seifert criterion for non-equilibrium in a three-state system

    NASA Astrophysics Data System (ADS)

    Jia, Chen; Chen, Yong

    2015-05-01

    In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion for identifying a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann-Schmiedl-Seifert (ASS) criterion. Moreover, the ASS criterion is compared with some other criteria for a NESS.
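
    Complementary to the ASS criterion, the classical way to decide whether a three-state Markov system can be in equilibrium at all is the Kolmogorov cycle condition: detailed balance holds if and only if the product of rates around the cycle equals the product in the reverse direction. A sketch of this (distinct, textbook) criterion:

      def is_equilibrium(k, tol=1e-12):
          """Kolmogorov cycle condition for a 3-state chain.
          k[i][j] is the transition rate i -> j over states 0, 1, 2."""
          forward = k[0][1] * k[1][2] * k[2][0]
          backward = k[1][0] * k[2][1] * k[0][2]
          return abs(forward - backward) <= tol * max(forward, backward)

      k_eq   = {0: {1: 2.0, 2: 3.0}, 1: {0: 1.0, 2: 3.0}, 2: {0: 1.0, 1: 2.0}}
      k_ness = {0: {1: 2.0, 2: 3.0}, 1: {0: 1.0, 2: 3.0}, 2: {0: 2.0, 1: 2.0}}
      print(is_equilibrium(k_eq), is_equilibrium(k_ness))   # True False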

  12. An Independent and Coordinated Criterion for Kinematic Aircraft Maneuvers

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.; Hagen, George

    2014-01-01

    This paper proposes a mathematical definition of an aircraft-separation criterion for kinematic-based horizontal maneuvers. It has been formally proved that kinematic maneuvers that satisfy the new criterion are independent and coordinated for repulsiveness, i.e., the distance at closest point of approach increases whether one or both aircraft maneuver according to the criterion. The proposed criterion is currently used in NASA's Airborne Coordinated Resolution and Detection (ACCoRD) set of tools for the design and analysis of separation assurance systems.

  13. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, at a 50% compression rate: optimal wavelet, mean ± SD, 5.46 ± 1.01%; worst wavelet, 12.76 ± 2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
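
    The signal-by-signal dependence on the mother wavelet is easy to observe even with classic (non-optimized) wavelets; the sketch below uses the PyWavelets package to compare distortion at a fixed compression rate (illustrative of the phenomenon, not of the authors' parameterized wavelet optimization or EZW coder):

      import numpy as np
      import pywt

      def distortion_at_rate(x, wavelet, keep=0.5):
          """Zero all but the largest `keep` fraction of wavelet coefficients,
          reconstruct, and return the percent RMS distortion."""
          coeffs = pywt.wavedec(x, wavelet)
          flat = np.concatenate(coeffs)
          thresh = np.quantile(np.abs(flat), 1 - keep)
          coeffs = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
          xr = pywt.waverec(coeffs, wavelet)[:len(x)]
          return 100 * np.linalg.norm(x - xr) / np.linalg.norm(x)

      t = np.linspace(0, 1, 1024)
      x = np.sin(40 * t) * np.exp(-3 * t) + 0.05 * np.random.randn(t.size)  # toy signal
      for w in ("db2", "db8", "sym5", "coif3"):
          print(w, round(distortion_at_rate(x, w), 2))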

  14. Quantization selection in the high-throughput H.264/AVC encoder based on the RD

    NASA Astrophysics Data System (ADS)

    Pastuszak, Grzegorz

    2013-10-01

    In a hardware video encoder, quantization is responsible for quality losses; on the other hand, it allows bit rates to be reduced to the target one. If mode selection is based on the rate-distortion criterion, quantization can also be adjusted to obtain better compression efficiency. In particular, the use of a Lagrangian cost function with a given multiplier enables the encoder to select the most suitable quantization step, determined by the quantization parameter QP. Moreover, the quantization offset added before discarding the fractional value after quantization can be adjusted. In order to select the best quantization parameter and offset in real time, the HD/SD encoder should be implemented in hardware. In particular, the hardware architecture should embed transformation and quantization modules able to process the same residuals many times. In this work, such an architecture is used. Experimental results show what improvements in compression efficiency are achievable for Intra coding.
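
    The rate-distortion decision itself reduces to one line: among the candidate quantization settings, pick the one minimizing the Lagrangian cost J = D + λR. A schematic sketch (toy distortion/rate numbers; in practice λ is typically derived from QP):

      def pick_qp(candidates, lam):
          """candidates: list of (qp, distortion, rate_bits).
          Returns the QP minimizing the Lagrangian cost J = D + lam * R."""
          return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

      # Toy RD points for one macroblock at three quantization parameters.
      cands = [(22, 10.0, 900.0), (27, 35.0, 520.0), (32, 95.0, 300.0)]
      for lam in (0.02, 0.1, 0.5):
          print(lam, "->", pick_qp(cands, lam))   # larger lambda favors fewer bits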

  15. Optimal Parameter Design of Coarse Alignment for Fiber Optic Gyro Inertial Navigation System.

    PubMed

    Lu, Baofeng; Wang, Qiuying; Yu, Chunmei; Gao, Wei

    2015-06-25

    Two different coarse alignment algorithms for Fiber Optic Gyro (FOG) Inertial Navigation Systems (INS) based on the inertial reference frame are discussed in this paper. Both are based on gravity vector integration; therefore, the performance of these algorithms is determined by the integration time. In previous works, the integration time was selected by experience. In order to give a criterion for the selection process, and to make the selection of the integration time more accurate, an optimal parameter design of these algorithms for FOG INS is performed in this paper. The design process is accomplished based on an analysis of the error characteristics of the two coarse alignment algorithms. Moreover, this analysis and optimal parameter design allow us to make an adequate selection of the most accurate algorithm for FOG INS according to the actual operational conditions. The analysis and simulation results show that the parameter provided by this work is the optimal value, and indicate that under different operational conditions, different coarse alignment algorithms should be adopted for FOG INS in order to achieve better performance. Lastly, the experimental results validate the effectiveness of the proposed algorithm.

  16. [Silvicultural treatments and their selection effects].

    PubMed

    Vincent, G

    1973-01-01

    Selection can be defined in terms of its observable consequences as the non-random differential reproduction of genotypes (Lerner 1958). In forest stands, during improvement fellings and reproduction treatments, we select the individuals that excel in growth or in the production of first-class timber. However, silvicultural treatments in forest stands guarantee a permanent increase of forest production only if they are carried out according to the principles of directional (dynamic) selection. These principles require that the trees retained for further growth and for forest regeneration be selected by their hereditary properties, i.e., by their genotypes. To make this selection feasible, our study deals with the genetic parameters and gives some examples of the application of the response, the selection differential, heritability in the narrow and broad sense, and the genetic and genotypic gain. These parameters make it possible to estimate the economic success of various silvicultural treatments in forest stands. The examples demonstrate that selection measures of higher intensity are manifested in a higher selection differential and in higher genetic and genotypic gain, and that such measures show more distinct effects in variable populations - in natural forests - than in populations characterized by smaller variability, e.g., in many uniform artificially established stands. The examples of the influence of different selection regimes on the genotype composition of populations prove that genetics teaches us to differentiate among the genotypes of the same species and at the same time gives us new criteria for evaluating selection treatments. From an economic point of view, these criteria must be considered in silviculture, if only because they allow us to judge the genetic composition of forest stands in the following generation, that is, over a time span longer than a human age.
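
    The parameters named above combine in the standard breeder's equation, which gives the expected one-generation response to a selective treatment (stated here in its textbook form):

      R = h^{2} S,

    where S is the selection differential (the mean of the retained trees minus the stand mean) and h^2 is the narrow-sense heritability. For example, with S = 10 cm of stem diameter and h^2 = 0.3, the expected genetic gain is R = 3 cm in the next generation.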

  17. Characterizing the functional MRI response using Tikhonov regularization.

    PubMed

    Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E

    2007-09-20

    The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block-design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features of the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. Copyright © 2007 John Wiley & Sons, Ltd.
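
    The core fitting step, penalized least squares with a roughness penalty, can be sketched in a few lines; the version below uses a generic basis matrix and a second-difference penalty (the B-spline basis and whitening criterion of the paper are not reproduced):

      import numpy as np

      def tikhonov_fit(B, y, lam):
          """Solve min ||y - B c||^2 + lam ||D c||^2, D = second differences.
          Returns the basis coefficients c."""
          m = B.shape[1]
          D = np.diff(np.eye(m), n=2, axis=0)           # roughness penalty
          return np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)

      # Toy hemodynamic-like response with autocorrelated noise, Gaussian-bump basis.
      t = np.linspace(0, 30, 300)
      y = t ** 3 * np.exp(-t) / 20
      y += 0.05 * np.convolve(np.random.randn(t.size), [0.5, 0.3, 0.2], mode="same")
      centers = np.linspace(0, 30, 25)
      B = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 1.5) ** 2)
      for lam in (1e-3, 1e-1, 1e1):                     # smoothing grows with lam
          c = tikhonov_fit(B, y, lam)
          print(lam, round(float(np.linalg.norm(y - B @ c)), 3))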

  18. Photostick: a method for selective isolation of target cells from culture

    PubMed Central

    Chien, Miao-Ping; Werley, Christopher A.; Farhi, Samouil L.

    2015-01-01

    Sorting of target cells from a heterogeneous pool is technically difficult when the selection criterion is complex, e.g. a dynamic response, a morphological feature, or a combination of multiple parameters. At present, mammalian cell selections are typically performed either via static fluorescence (e.g. fluorescence activated cell sorter), via survival (e.g. antibiotic resistance), or via serial operations (flow cytometry, laser capture microdissection). Here we present a simple protocol for selecting cells based on any static or dynamic property that can be identified by video microscopy and image processing. The “photostick” technique uses a cell-impermeant photochemical crosslinker and digital micromirror array-based patterned illumination to immobilize selected cells on the culture dish. Other cells are washed away with mild protease treatment. The crosslinker also labels the selected cells with a fluorescent dye and a biotin for later identification. The photostick protocol preserves cell viability, permits genetic profiling of selected cells, and can be performed with complex functional selection criteria such as neuronal firing patterns. PMID:25705368

  19. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities.

    PubMed

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors, and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can be helpful for improving the performance of the combined Fisher and Laplacian based feature selection method.
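
    A minimal sketch of the classical (supervised) Fisher score mentioned above, which ranks each descriptor by between-class scatter over within-class variance; the paper's extension to continuous activities and its Laplacian (semi-supervised) term are not reproduced, and the helper name is an invention for illustration.

        import numpy as np

        def fisher_scores(X, y):
            """Fisher score per feature: between-class scatter of class means
            divided by pooled within-class variance (higher = more relevant)."""
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / (den + 1e-12)  # guard against zero within-class variance

        # Keep the k highest-scoring descriptors:
        # top_k = np.argsort(fisher_scores(X, y))[::-1][:k]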

  20. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities

    NASA Astrophysics Data System (ADS)

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of the Fisher and Laplacian criteria, which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors, and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method perform better than the other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that compounds with known and unknown activities can be helpful for improving the performance of the combined Fisher and Laplacian based feature selection method.

  1. DOES THE INCLUSION CRITERION OF WOMEN’S AGGRESSION AS OPPOSED TO THEIR VICTIMIZATION RESULT IN SAMPLES THAT DIFFER ON KEY DIMENSIONS OF INTIMATE PARTNER VIOLENCE?

    PubMed Central

    Sullivan, Tami P.; Titus, Jennifer A.; Holt, Laura J.; Swan, Suzanne C.; Fisher, Bonnie S.; Snow, David L.

    2010-01-01

    This study is among the first attempts to address a frequently articulated, yet unsubstantiated claim that a sample inclusion criterion based on women’s physical aggression or victimization will yield different distributions of severity and type of partner violence and injury. Independent samples of African-American women participated in separate studies based on an inclusion criterion of either women’s physical aggression or victimization. Between-groups comparisons showed that the samples did not differ in physical, sexual, or psychological aggression; physical, sexual, or psychological victimization; or inflicted or sustained injury. Therefore, an inclusion criterion based on physical aggression or victimization did not yield unique samples of “aggressors” and “victims.” PMID:19949230

  2. Entanglement criterion for tripartite systems based on local sum uncertainty relations

    NASA Astrophysics Data System (ADS)

    Akbari-Kourbolagh, Y.; Azhdargalam, M.

    2018-04-01

    We propose a sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations for arbitrarily chosen observables of subsystems. This criterion generalizes the tighter criterion for bipartite systems introduced by Zhang et al. [C.-J. Zhang, H. Nha, Y.-S. Zhang, and G.-C. Guo, Phys. Rev. A 81, 012324 (2010), 10.1103/PhysRevA.81.012324] and can be used for both discrete- and continuous-variable systems. It enables us to detect the entanglement of quantum states without having a complete knowledge of them. Its utility is illustrated by some examples of three-qubit, qutrit-qutrit-qubit, and three-mode Gaussian states. It is found that, in comparison with other criteria, this criterion is able to detect some three-qubit bound entangled states more efficiently.
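
    For orientation, the bipartite local sum uncertainty relation underlying this family of criteria (the Hofmann-Takeuchi form, which Zhang et al. subsequently tightened) can be sketched as follows; this display is only the underlying idea, and the exact tightened and tripartite inequalities are given in the cited papers. If the chosen subsystem observables obey \sum_i \Delta^2(A_i) \ge U_A and \sum_i \Delta^2(B_i) \ge U_B, then every separable state satisfies

        \sum_i \Delta^2\!\left( A_i \otimes \mathbb{1} + \mathbb{1} \otimes B_i \right) \;\ge\; U_A + U_B ,

    so any state violating the bound is necessarily entangled, without requiring full state tomography.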

  3. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm of statistically uncorrelated optimal discriminant vectors and a new algorithm of orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method of statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. Besides, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
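
    A hedged sketch of plain MMC feature extraction: the projection directions are the leading eigenvectors of S_b - S_w. The statistically uncorrelated and orthogonal variants proposed in this record impose additional constraints on top of this basic step, which are not reproduced here.

        import numpy as np

        def mmc_projection(X, y, n_components):
            classes = np.unique(y)
            mu = X.mean(axis=0)
            d = X.shape[1]
            Sb = np.zeros((d, d))  # between-class (inter-class) scatter
            Sw = np.zeros((d, d))  # within-class (intra-class) scatter
            for c in classes:
                Xc = X[y == c]
                diff = (Xc.mean(axis=0) - mu)[:, None]
                Sb += len(Xc) * diff @ diff.T
                Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
            evals, evecs = np.linalg.eigh(Sb - Sw)  # symmetric, so eigh suffices
            return evecs[:, np.argsort(evals)[::-1][:n_components]]

        # Reduced features for a face classifier, e.g.: Z = X @ mmc_projection(X, y, 40)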

  4. Do the Hard Things First: A Randomized Controlled Trial Testing the Effects of Exemplar Selection on Generalization Following Therapy for Grammatical Morphology

    PubMed Central

    Fey, Marc; Curran, Maura

    2017-01-01

    Purpose: Complexity-based approaches to treatment have been gaining popularity in domains such as phonology and aphasia but have not yet been tested in child morphological acquisition. In this study, we examined whether beginning treatment with easier-to-inflect (easy first) or harder-to-inflect (hard first) verbs led to greater progress in the production of regular past-tense -ed by children with developmental language disorder. Method: Eighteen children with developmental language disorder (ages 4–10) participated in a randomized controlled trial (easy first, N = 10; hard first, N = 8). Verbs were selected on the basis of frequency, phonological complexity, and telicity (i.e., the completedness of the event). Progress was measured by the duration of therapy, number of verb lists trained to criterion, and pre/post gains in accuracy for trained and untrained verbs on structured probes. Results: The hard-first group made greater gains in accuracy on both trained and untrained verbs but did not have fewer therapy visits or train to criterion on more verb lists than the easy-first group. Treatment fidelity, average recasts per session, and verbs learned did not differ across conditions. Conclusion: When targeting grammatical morphemes, it may be most efficient for clinicians to select harder rather than easier exemplars of the target. PMID:28796874

  5. Insurees' preferences in hospital choice-A population-based study.

    PubMed

    Schuldt, Johannes; Doktor, Anna; Lichters, Marcel; Vogt, Bodo; Robra, Bernt-Peter

    2017-10-01

    In Germany, the patient himself makes the choice for or against a health service provider. Hospital comparison websites offer him possibilities to inform himself before choosing. However, it remains unclear how health care consumers use those websites, and there is little information about how preferences in hospital choice differ interpersonally. We conducted a Discrete-Choice-Experiment (DCE) on hospital choice with 1500 randomly selected participants (age 40-70) in three different German cities, selecting four attributes to construct the hospital vignettes. The analysis of the study draws on multilevel mixed effects logit regression analyses with the dependent variables "chance to select a hospital" and "choice confidence". Subsequently, we performed a Latent-Class-Analysis to uncover consumer segments with distinct preferences. 590 of the questionnaires were evaluable. All four attributes of the hospital vignettes have a significant impact on hospital choice. The attribute "complication rate" exerts the highest impact on consumers' decisions and reported choice confidence. Latent-Class-Analysis results in one dominant consumer segment that considered the complication rate the most important decision criterion. Using the DCE, we were able to show that the complication rate is an important and trusted criterion in hospital choice for a large group of consumers. Our study supports current governmental efforts in Germany to concentrate the provision of specialized health care services. We suggest further national and cross-national research on the topic. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Predicting the required number of training samples. [for remotely sensed image data based on covariance matrix estimate quality criterion of normal distribution

    NASA Technical Reports Server (NTRS)

    Kalayeh, H. M.; Landgrebe, D. A.

    1983-01-01

    A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109

  7. A parameter for the assessment of the segmentation of TEM tomography reconstructed volumes based on mutual information.

    PubMed

    Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen

    2017-12-01

    A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To do this selection, a parameter for the comparison of the accuracies of the different segmentations has been defined. It consists of the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it has been proved that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to the results obtained from another validated method to select the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
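
    A hedged sketch of the core quantity: the histogram estimate of mutual information between an acquired TEM image and the corresponding Radon projection of a candidate segmented volume. Computing the Radon projections themselves is omitted, and the bin count is an arbitrary choice.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """Histogram estimate of I(A;B) = H(A) + H(B) - H(A,B), in nats."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0  # avoid log(0) on empty cells
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    The segmentation whose re-projections share the most information with the acquired tilt series would then be selected as the optimum.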

  8. Quality of service routing in the differentiated services framework

    NASA Astrophysics Data System (ADS)

    Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo

    2001-02-01

    In this paper we present a quality of service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. This routing strategy is based on a quality of service metric that represents the impact that the delay and losses verified at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the values advertised are the moving average of the last values measured.
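
    A toy sketch of the advertisement trigger described above: the advertised value is a moving average of recent metric measurements, and a new advertisement is issued only when the average drifts past one of two relative thresholds. The class name, window size, and threshold values are illustrative assumptions.

        from collections import deque

        class MetricAdvertiser:
            def __init__(self, window=8, low=0.8, high=1.2):
                self.samples = deque(maxlen=window)  # recent delay/loss impact values
                self.advertised = None
                self.low, self.high = low, high      # relative trigger thresholds

            def measure(self, value):
                self.samples.append(value)
                avg = sum(self.samples) / len(self.samples)
                if self.advertised is None or not (
                    self.low * self.advertised <= avg <= self.high * self.advertised
                ):
                    self.advertised = avg  # re-advertise the moving average
                    return avg             # caller floods this value to neighbours
                return None                # still within thresholds: stay quiet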

  9. Towards a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Hilburger, Mark W.

    2003-01-01

    A probability-based analysis method for predicting buckling loads of compression-loaded laminated-composite shells is presented, and its potential as a basis for a new shell-stability design criterion is demonstrated and discussed. In particular, a database containing information about specimen geometry, material properties, and measured initial geometric imperfections for a selected group of laminated-composite cylindrical shells is used to calculate new buckling-load "knockdown factors". These knockdown factors are shown to be substantially improved, and hence much less conservative than the corresponding deterministic knockdown factors that are presently used by industry. The probability integral associated with the analysis is evaluated by using two methods; that is, by using the exact Monte Carlo method and by using an approximate First-Order Second-Moment method. A comparison of the results from these two methods indicates that the First-Order Second-Moment method yields results that are conservative for the shells considered. Furthermore, the results show that the improved, reliability-based knockdown factor presented always yields a safe estimate of the buckling load for the shells examined.

  10. A discrete artificial bee colony algorithm incorporating differential evolution for the flow-shop scheduling problem with blocking

    NASA Astrophysics Data System (ADS)

    Han, Yu-Yan; Gong, Dunwei; Sun, Xiaoyan

    2015-07-01

    A flow-shop scheduling problem with blocking has important applications in a variety of industrial systems but is underrepresented in the research literature. In this study, a novel discrete artificial bee colony (ABC) algorithm is presented to solve the above scheduling problem with a makespan criterion by incorporating the ABC with differential evolution (DE). The proposed algorithm (DE-ABC) contains three key operators. One is related to the employed bee operator (i.e. adopting mutation and crossover operators of discrete DE to generate solutions with good quality); the second is concerned with the onlooker bee operator, which modifies the selected solutions using insert or swap operators based on the self-adaptive strategy; and the last is for the local search, that is, the insert-neighbourhood-based local search with a small probability is adopted to improve the algorithm's capability in exploitation. The performance of the proposed DE-ABC algorithm is empirically evaluated by applying it to well-known benchmark problems. The experimental results show that the proposed algorithm is superior to the compared algorithms in minimizing the makespan criterion.
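
    The makespan objective under blocking can be evaluated with the standard departure-time recurrence, in which a job stays on its machine until the next machine is free; a sketch that any of the DE-ABC operators above could call as its fitness function (data layout assumed):

        def blocking_makespan(seq, p):
            """seq: job order; p[j][i]: processing time of job j on machine i."""
            m = len(p[0])
            d = [0.0] * (m + 1)  # d[i]: departure of the previous job from machine i
            for j in seq:
                nd = [0.0] * (m + 1)
                nd[0] = d[1]     # machine 1 frees up when the previous job leaves it
                for i in range(1, m):
                    # finish on machine i, but stay blocked until machine i+1 is free
                    nd[i] = max(nd[i - 1] + p[j][i - 1], d[i + 1])
                nd[m] = nd[m - 1] + p[j][m - 1]  # the last machine never blocks
                d = nd
            return d[m]

        # blocking_makespan([2, 0, 1], [[3, 2], [1, 4], [2, 2]])  ->  11.0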

  11. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
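
    In the Empirical Distribution Function formulation described above, the Probability of Success reduces to counting how often jointly simulated criterion values fall inside the customer's target ranges; a hedged Monte Carlo sketch (criterion names and ranges are invented for illustration):

        import numpy as np

        def probability_of_success(samples, targets):
            """samples: (n, k) array of jointly simulated criterion values.
            targets: one (low, high) range of interest per criterion."""
            ok = np.ones(len(samples), dtype=bool)
            for j, (lo, hi) in enumerate(targets):
                ok &= (samples[:, j] >= lo) & (samples[:, j] <= hi)
            return ok.mean()  # empirical joint probability of meeting all goals

        # Columns, e.g., (range_nm, cost_M$, noise_dB) from one simulation run:
        # pos = probability_of_success(sim, [(5500, np.inf), (0, 120), (0, 105)])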

  12. Distance Measurements In X-Ray Pictures

    NASA Astrophysics Data System (ADS)

    Forsgren, Per-Ola

    1987-10-01

    In this paper, a measurement method for the distance between binary objects will be presented. It has been developed for a specific purpose, the evaluation of rheumatic disease, but it should also be useful in other applications. It is based on a distance map in the area between binary objects. A skeleton is extracted from the distance map by searching for local maxima. The distance measure is based on the average of skeleton points in a defined measurement area. An objective criterion for selection of measurement points on the skeleton is proposed. Preliminary results indicate that good repeatability is attained.

  13. A general framework to learn surrogate relevance criterion for atlas based image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Tingting; Ruan, Dan

    2016-09-01

    Multi-atlas based image segmentation sees great opportunities in the big data era but also faces unprecedented challenges in identifying positive contributors from extensive heterogeneous data. To assess data relevance, image similarity criteria based on various image features widely serve as surrogates for the inaccessible geometric agreement criteria. This paper proposes a general framework to learn image based surrogate relevance criteria to better mimic the behaviors of segmentation based oracle geometric relevance. The validity of its general rationale is verified in the specific context of fusion set selection for image segmentation. More specifically, we first present a unified formulation for surrogate relevance criteria and model the neighborhood relationship among atlases based on the oracle relevance knowledge. Surrogates are then trained to be small for geometrically relevant neighbors and large for irrelevant remotes to the given targets. The proposed surrogate learning framework is verified in corpus callosum segmentation. The learned surrogates demonstrate superiority in inferring the underlying oracle value and selecting relevant fusion set, compared to benchmark surrogates.

  14. Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy

    NASA Astrophysics Data System (ADS)

    Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin

    2015-11-01

    An important concern in hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a fracture prediction model to predict fracture initiation in hot compression of a burn-resistant beta-stabilized titanium alloy Ti-35V-15Cr-0.3Si-0.1C using a combined approach of upsetting experiments, theoretical failure criteria and finite element (FE) simulation techniques. A series of isothermal compression experiments on cylindrical specimens was first conducted in the temperature range of 900-1150 °C and at strain rates of 0.01-10 s⁻¹ to obtain fracture samples and primary reduction data. Based on that, a comparison of eight commonly used theoretical failure criteria was made, and the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By building a correlation between the threshold values and the deforming parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential decay relationship between threshold values and the Zener-Hollomon parameter (Z), and the relative error of the model is less than 15%. This model was then applied successfully in the cogging of Ti-35V-15Cr-0.3Si-0.1C billet.
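
    For reference, the Oh criterion selected above is the normalized Cockcroft-Latham-type damage integral: fracture is predicted when the maximum principal tensile stress σ*, weighted against the effective stress σ̄ and accumulated over the effective strain path, reaches a critical value C,

        \int_0^{\bar{\varepsilon}_f} \frac{\sigma^{*}}{\bar{\sigma}}\, d\bar{\varepsilon} \;=\; C .

    The model described in this record then lets the threshold C decay exponentially with the Zener-Hollomon parameter Z rather than treating it as a material constant.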

  15. An approximate spin design criterion for monoplanes, 1 May 1939

    NASA Technical Reports Server (NTRS)

    Seidman, O.; Donlan, C. J.

    1976-01-01

    An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.

  16. Orbit IMU alinement interpretation of onboard display data

    NASA Technical Reports Server (NTRS)

    Corson, R.

    1978-01-01

    The space shuttle inertial measurement unit (IMU) alinement algorithm was examined to determine the most important alinement starpair selection criterion. Three crew-displayed parameters were considered: (1) the results of the separation angle difference (SAD) check for each starpair; (2) the separation angle of each starpair; and (3) the age of each star measurement. It was determined that the SAD for each pair cannot be used to predict the IMU alinement accuracy. If the age of each star measurement is less than approximately 30 minutes, time is a relatively unimportant factor and the most important alinement pair selection criterion is the starpair separation angle. Therefore, when there are three available alinement starpairs and all measurements were taken within the last 30 minutes, the pair with the separation angle closest to 90 degrees should be selected for IMU alinement.
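
    The selection rule distilled above fits in a few lines; a toy sketch with an assumed data layout:

        def pick_alinement_pair(pairs, now, max_age_min=30.0):
            """pairs: dicts with 'sep_deg' (starpair separation angle) and
            'meas_times' (times of the two star measurements)."""
            fresh = [p for p in pairs
                     if all(now - t <= max_age_min for t in p['meas_times'])]
            if not fresh:
                return None  # stale measurements: age becomes the dominant factor
            # Among fresh pairs, prefer the separation angle closest to 90 degrees
            return min(fresh, key=lambda p: abs(p['sep_deg'] - 90.0))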

  17. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  18. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  19. Diel habitat selection of largemouth bass following woody structure installation in Table Rock Lake, Missouri

    USGS Publications Warehouse

    Harris, J.M.; Paukert, Craig P.; Bush, S.C.; Allen, M.J.; Siepker, Michael

    2018-01-01

    Largemouth bass Micropterus salmoides (Lacepède) use of installed habitat structure was evaluated in a large Midwestern USA reservoir to determine whether or not these structures were used in similar proportion to natural habitats. Seventy largemouth bass (>380 mm total length) were surgically implanted with radio transmitters and a subset was relocated monthly during day and night for one year. The top habitat selection models (based on Akaike's information criterion) suggest largemouth bass select 2–4 m depths during night and 4–7 m during day, whereas littoral structure selection was similar across diel periods. Largemouth bass selected boat docks at twice the rate of other structures. Installed woody structure was selected at similar rates to naturally occurring complex woody structure, whereas both were selected at a higher rate than simple woody structure. The results suggest the addition of woody structure may concentrate largemouth bass and mitigate the loss of woody habitat in a large reservoir.

  20. Using histograms to introduce randomization in the generation of ensembles of decision trees

    DOEpatents

    Kamath, Chandrika; Cantu-Paz, Erick; Littau, David

    2005-02-22

    A system for decision tree ensembles that includes a module to read the data, a module to create a histogram, a module to evaluate a potential split according to some criterion using the histogram, a module to select a split point randomly in an interval around the best split, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method includes the steps of reading the data; creating a histogram; evaluating a potential split according to some criterion using the histogram, selecting a split point randomly in an interval around the best split, splitting the data, and combining multiple decision trees in ensembles.
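
    A hedged sketch of the randomized histogram split at the heart of this method: candidate thresholds are histogram bin edges, the best edge is found by an impurity criterion, and the actual split point is then drawn uniformly from the interval around that best edge. The bin count, impurity measure, and interval width are illustrative choices.

        import numpy as np

        def gini(y):
            _, counts = np.unique(y, return_counts=True)
            p = counts / counts.sum()
            return 1.0 - np.sum(p ** 2)

        def randomized_histogram_split(x, y, bins=32, rng=None):
            rng = rng or np.random.default_rng()
            edges = np.histogram_bin_edges(x, bins=bins)
            best_i, best_score = None, np.inf
            for i in range(1, len(edges) - 1):
                left, right = y[x < edges[i]], y[x >= edges[i]]
                if len(left) == 0 or len(right) == 0:
                    continue  # skip empty partitions
                score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
                if score < best_score:
                    best_i, best_score = i, score
            if best_i is None:
                return float(np.median(x))  # degenerate feature: fall back
            # Randomize: draw the split uniformly in the interval around the best edge
            return rng.uniform(edges[best_i - 1], edges[best_i + 1])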

  1. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    PubMed

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently taken on a prominent role in medical image processing and is useful for diagnosing and treating many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for the high and low frequency contents, based on the coefficient characteristics of the wrapping second generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For the low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
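
    The two fusion rules can be sketched independently of the curvelet machinery; here plain arrays stand in for subband coefficients of the two source images. A real implementation would obtain them from a wrapping curvelet transform together with a fuzzy weight map, so everything below is an illustrative assumption.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def fuse_high(a, b, w):
            """Blend of max-selection and averaging for detail coefficients;
            w in [0, 1] is a (possibly per-coefficient) fuzzy weight."""
            maxsel = np.where(np.abs(a) > np.abs(b), a, b)
            return w * maxsel + (1 - w) * (a + b) / 2.0

        def fuse_low(a, b, win=3):
            """Maximum-selection on local energy for the coarse subband."""
            ea = uniform_filter(a ** 2, win)  # local energy of each source
            eb = uniform_filter(b ** 2, win)
            return np.where(ea >= eb, a, b)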

  2. Comparison and continuous estimates of fecal coliform and Escherichia coli bacteria in selected Kansas streams, May 1999 through April 2002

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Ziegler, Andrew C.

    2003-01-01

    The sanitary quality of water and its use as a public-water supply and for recreational activities, such as swimming, wading, boating, and fishing, can be evaluated on the basis of fecal coliform and Escherichia coli (E. coli) bacteria densities. This report describes the overall sanitary quality of surface water in selected Kansas streams, the relation between fecal coliform and E. coli, the relation between turbidity and bacteria densities, and how continuous bacteria estimates can be used to evaluate the water-quality conditions in selected Kansas streams. Samples for fecal coliform and E. coli were collected at 28 surface-water sites in Kansas. Of the 318 samples collected, 18 percent exceeded the current Kansas Department of Health and Environment (KDHE) secondary contact recreational, single-sample criterion for fecal coliform (2,000 colonies per 100 milliliters of water). Of the 219 samples collected during the recreation months (April 1 through October 31), 21 percent exceeded the current (2003) KDHE single-sample fecal coliform criterion for secondary contact recreation (2,000 colonies per 100 milliliters of water) and 36 percent exceeded the U.S. Environmental Protection Agency (USEPA) recommended single-sample primary contact recreational criterion for E. coli (576 colonies per 100 milliliters of water). Comparisons of fecal coliform and E. coli criteria indicated that more than one-half of the streams sampled could exceed USEPA recommended E. coli criteria more frequently than the current KDHE fecal coliform criteria. In addition, the ratios of E. coli to fecal coliform (EC/FC) were smallest for sites with slightly saline water (specific conductance greater than 1,000 microsiemens per centimeter at 25 degrees Celsius), indicating that E. coli may not be a good indicator of sanitary quality for those streams. Enterococci bacteria may provide a more accurate assessment of the potential for swimming-related illnesses in these streams. Ratios of EC/FC and linear regression models were developed for estimating E. coli densities on the basis of measured fecal coliform densities for six individual and six groups of surface-water sites. Regression models developed for the six individual surface-water sites and six groups of sites explain at least 89 percent of the variability in E. coli densities. The EC/FC ratios and regression models are site specific and make it possible to convert historic fecal coliform bacteria data to estimated E. coli densities for the selected sites. The EC/FC ratios can be used to estimate E. coli for any range of historical fecal coliform densities, and in some cases with less error than the regression models. The basin- and statewide regression models explained at least 93 percent of the variance and best represent the sites where a majority of the data used to develop the models were collected (Kansas and Little Arkansas Basins). Comparison of the current (2003) KDHE geometric-mean primary contact criterion for fecal coliform bacteria of 200 col/100 mL to the 2002 USEPA recommended geometric-mean criterion of 126 col/100 mL for E. coli results in an EC/FC ratio of 0.63. The geometric-mean EC/FC ratio for all sites except Rattlesnake Creek (site 21) is 0.77, indicating that considerably more than 63 percent of the fecal coliform is E. coli. This potentially could lead to more exceedances of the recommended E. coli criterion, where the water now meets the current (2003) 200-col/100 mL fecal coliform criterion.
In this report, turbidity was found to be a reliable estimator of bacteria densities. Regression models are provided for estimating fecal coliform and E. coli bacteria densities using continuous turbidity measurements. Prediction intervals also are provided to show the uncertainty associated with using the regression models. Eighty percent of all measured sample densities and individual turbidity-based estimates from the regression models were in agreement as exceeding or not exceeding the criteria.
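
    A hedged sketch of the kind of site-specific regression the report describes for continuous estimation; the log-log model form is a common convention assumed here for illustration, and the report's fitted coefficients are site specific.

        import numpy as np

        def fit_turbidity_model(turbidity, bacteria):
            """Fit log10(density) = b0 + b1 * log10(turbidity) by least squares."""
            b1, b0 = np.polyfit(np.log10(turbidity), np.log10(bacteria), 1)
            return b0, b1

        def estimate_density(b0, b1, turbidity):
            return 10.0 ** (b0 + b1 * np.log10(turbidity))  # colonies per 100 mL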

  3. Predictive Validity of an Empirical Approach for Selecting Promising Message Topics: A Randomized-Controlled Study

    PubMed Central

    Lee, Stella Juhyun; Brennan, Emily; Gibson, Laura Anne; Tan, Andy S. L.; Kybert-Momjian, Ani; Liu, Jiaying; Hornik, Robert

    2016-01-01

    Several message topic selection approaches propose that messages based on beliefs pretested and found to be more strongly associated with intentions will be more effective in changing population intentions and behaviors when used in a campaign. This study aimed to validate the underlying causal assumption of these approaches which rely on cross-sectional belief–intention associations. We experimentally tested whether messages addressing promising themes as identified by the above criterion were more persuasive than messages addressing less promising themes. Contrary to expectations, all messages increased intentions. Interestingly, mediation analyses showed that while messages deemed promising affected intentions through changes in targeted promising beliefs, messages deemed less promising also achieved persuasion by influencing nontargeted promising beliefs. Implications for message topic selection are discussed. PMID:27867218

  4. Neural classification of the selected family of butterflies

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Boniecki, P.; Piekarska-Boniecka, H.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.

    2017-07-01

    There has been growing interest among researchers in drawing conclusions from information coded in graphic form. The neural identification of pictorial data, with special emphasis on both quantitative and qualitative analysis, is increasingly utilized to gain and deepen empirical knowledge of the data. Extraction and subsequent classification of selected picture features, such as color or surface structure, make it possible to create computer tools that identify objects presented as, for example, digital pictures. The work presents the original computer system "Processing the image v.1.0", designed to digitize pictures on the basis of a color criterion. The system has been applied to generate a reference learning file for training an Artificial Neural Network (ANN) to identify selected kinds of butterflies from the Papilionidae family.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albeverio, Sergio; Chen Kai; Fei Shaoming

    A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in the previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999) and N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.
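
    For context, the realignment (computable cross norm) criterion on which this work builds states that for every separable state ρ the realigned matrix R(ρ), obtained by rearranging the matrix elements of ρ so that each block becomes a row vector, satisfies

        \left\lVert \mathcal{R}(\rho) \right\rVert_{\mathrm{tr}} \;\le\; 1 ,

    so a trace norm larger than one certifies entanglement. The criterion in this record relates the realignment of the total density matrix to realignments of its reduced density matrices.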

  6. Putting the Biological Species Concept to the Test: Using Mating Networks to Delimit Species

    PubMed Central

    Lagache, Lélia; Leger, Jean-Benoist; Daudin, Jean-Jacques; Petit, Rémy J.; Vacher, Corinne

    2013-01-01

    Although interfertility is the key criterion upon which Mayr’s biological species concept is based, it has never been applied directly to delimit species under natural conditions. Our study fills this gap. We used the interfertility criterion to delimit two closely related oak species in a forest stand by analyzing the network of natural mating events between individuals. The results reveal two groups of interfertile individuals connected by only a few mating events. These two groups were largely congruent with those determined using other criteria (morphological similarity, genotypic similarity and individual relatedness). Our study, therefore, shows that the analysis of mating networks is an effective method to delimit species based on the interfertility criterion, provided that adequate network data can be assembled. Our study also shows that although species boundaries are highly congruent across methods of species delimitation, they are not exactly the same. Most of the differences stem from assignment of individuals to an intermediate category. The discrepancies between methods may reflect a biological reality. Indeed, the interfertility criterion is an environment-dependent criterion, as species abundances typically affect rates of hybridization under natural conditions. Thus, the methods of species delimitation based on the interfertility criterion are expected to give results slightly different from those based on environment-independent criteria (such as the genotypic similarity criteria). However, whatever the criterion chosen, the challenge we face when delimiting species is to summarize continuous but non-uniform variations in biological diversity. The grade of membership model that we use in this study appears as an appropriate tool. PMID:23818990

  7. Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning

    ERIC Educational Resources Information Center

    Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus

    2008-01-01

    When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…

  8. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    PubMed Central

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
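
    For reference, the DIC being partitioned here decomposes into a goodness-of-fit term and a complexity penalty,

        \mathrm{DIC} \;=\; \bar{D} + p_D , \qquad p_D \;=\; \bar{D} - D(\bar{\theta}) ,

    where D̄ is the posterior mean deviance and D(θ̄) is the deviance evaluated at the posterior means of the parameters. The local DIC used in the paper distributes these quantities over individual observations (or groups of observations) so that fit, leverage, and complexity can be mapped spatially.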

  9. A method of selecting grid size to account for Hertz deformation in finite element analysis of spur gears

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Chao, C. H. C.

    1981-01-01

    A method of selecting grid size for the finite element analysis of gear tooth deflection is presented. The method is based on a finite element study of two cylinders in line contact, where the criterion for establishing element size was that there be agreement with the classical Hertzian solution for deflection. The results are applied to calculate deflection for the gear specimen used in the NASA spur gear test rig. Comparisons are made between the present results and the results of two other methods of calculation. The results have application in design of gear tooth profile modifications to reduce noise and dynamic loads.
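
    For reference, the classical Hertz solution for two parallel cylinders in line contact, which served as the benchmark for choosing the element size, gives the contact half-width b and peak pressure p_0 as

        b = \sqrt{\frac{4\, w\, R}{\pi E^{*}}}, \qquad
        p_0 = \frac{2\, w}{\pi\, b}, \qquad
        \frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2}, \qquad
        \frac{1}{E^{*}} = \frac{1 - \nu_1^2}{E_1} + \frac{1 - \nu_2^2}{E_2},

    where w is the load per unit length and R_1, R_2, E_1, E_2, ν_1, ν_2 are the cylinder radii and elastic constants.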

  10. Upper Gastrointestinal Hemorrhage: Development of the Severity Score.

    PubMed

    Chaikitamnuaychok, Rangson; Patumanond, Jayanton

    2012-12-01

    Emergency endoscopy for every patient with upper gastrointestinal hemorrhage is not possible in many medical centers. Simple guidelines to select patients for emergency endoscopy are lacking. The aim of the present report is to develop a simple scoring system to classify upper gastrointestinal hemorrhage (UGIH) severity based on patient clinical profiles at the emergency department. Retrospective data of patients with UGIH in a university-affiliated hospital were analyzed. Patients were criterion-classified into 3 severity levels: mild, moderate and severe. Clinical and laboratory information were compared among the 3 groups. Significant parameters were selected as indicators of severity. Coefficients of significant multivariable parameters were transformed into item scores, which were summed to form individual severity scores. The scores were used to classify patients into 3 urgency levels: non-urgent, urgent and emergent groups. Score-classification and criterion-classification were compared. Significant parameters in the model were age ≥ 60 years, pulse rate ≥ 100/min, systolic blood pressure < 100 mmHg, hemoglobin < 10 g/dL, blood urea nitrogen ≥ 35 mg/dL, presence of cirrhosis and hepatic failure. The score ranged from 0 to 27 and classified patients into 3 urgency groups: non-urgent (score < 4, n = 215, 21.2%), urgent (score 4 - 16, n = 677, 66.9%) and emergent (score > 16, n = 121, 11.9%). The score correctly classified 81.4% of the patients into their original (criterion-classified) severity groups. Under-estimation (7.5%) and over-estimation (11.1%) were clinically acceptable. Our UGIH severity scoring system classified patients into 3 urgency groups: non-urgent, urgent and emergent, with a clinically acceptable small number of under- and over-estimations. Its discriminative ability and precision should be validated before adopting into clinical practice.
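
    A minimal sketch of applying such an additive score at triage. The item weights below are placeholders chosen only so the total spans 0-27; the real weights come from the fitted multivariable coefficients, and only the cut points (<4, 4-16, >16) are taken from the abstract.

        def ugih_urgency(age, pulse, sbp, hb, bun, cirrhosis, hepatic_failure):
            score = 0
            score += 3 if age >= 60 else 0        # placeholder item weights:
            score += 3 if pulse >= 100 else 0     # the published model derives
            score += 4 if sbp < 100 else 0        # them from regression
            score += 4 if hb < 10 else 0          # coefficients
            score += 4 if bun >= 35 else 0
            score += 5 if cirrhosis else 0
            score += 4 if hepatic_failure else 0
            if score < 4:
                return score, "non-urgent"
            return score, "urgent" if score <= 16 else "emergent"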

  11. Dust-obscured star formation and the contribution of galaxies escaping UV/optical color selections at z ~ 2

    NASA Astrophysics Data System (ADS)

    Riguccini, L.; Le Floc'h, E.; Ilbert, O.; Aussel, H.; Salvato, M.; Capak, P.; McCracken, H.; Kartaltepe, J.; Sanders, D.; Scoville, N.

    2011-10-01

    Context. A substantial amount of the stellar mass growth across cosmic time occurred within dust-enshrouded environments. So far, identification of complete samples of distant star-forming galaxies from the short wavelength range has been strongly biased by the effect of dust extinction. Nevertheless, the exact amount of star-forming activity that took place in high-redshift dusty galaxies but that has currently been missed by optical surveys has barely been explored. Aims: Our goal is to determine the number of luminous star-forming galaxies at 1.5 ≲ z ≲ 3 that are potentially missed by the traditional color selection techniques because of dust extinction. We also aim at quantifying the contribution of these sources to the IR luminosity and cosmic star formation density at high redshift. Methods: We based our work on a sample of 24 μm sources brighter than 80 μJy and taken from the Spitzer survey of the COSMOS field. Almost all of these sources have accurate photometric redshifts. We applied to this mid-IR selected sample the BzK and BM/BX criteria, as well as the selections of the IRAC peakers and the Optically-Faint IR-bright (OFIR) galaxies. We analyzed the fraction of sources identified with these techniques. We also computed the 8 μm rest-frame luminosity from the 24 μm fluxes of our sources, and considering the relationships between L_8μm and L_Paα and between L_8μm and L_IR, we derived ρ_IR and then ρ_SFR for our MIPS sources. Results: The BzK criterion offers an almost complete (~90%) identification of the 24 μm sources at 1.4 < z < 2.5. In contrast, the BM/BX criterion misses 50% of the MIPS sources. We attribute this bias to the effect of extinction, which reddens the typical colors of galaxies. The contributions of these two selections to the IR luminosity density produced by all the sources brighter than 80 μJy are of the same order. Moreover, the criterion based on the presence of a stellar bump in their spectra (IRAC peakers) misses up to 40% of the IR luminosity density, while only 25% of the IR luminosity density at z ~ 2 is produced by OFIR galaxies characterized by extreme mid-IR to optical flux ratios. Conclusions: Color selections of distant star-forming galaxies must be used with care given the substantial bias they can suffer. In particular, the effect of dust extinction strongly affects the completeness of identifications at the bright end of the bolometric luminosity function, which implies large and uncertain extrapolations to account for the contribution of dusty galaxies missed by these selections. In the context of forthcoming facilities that will operate at long wavelengths (e.g., JWST, ALMA, SAFARI, EVLA, SKA), this emphasizes the importance of minimizing the extinction biases when probing the activity of star formation in the early Universe.

  12. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on the assumption that the noise distribution is Gaussian. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution. The proposed method can reduce the impact of large and non-Gaussian noises. The experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
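
    A hedged sketch of what replacing MSE with the maximum correntropy criterion looks like in a generic linear recovery problem: maximizing the Gaussian-kernel correntropy of the residuals can be done by half-quadratic optimization, i.e. iteratively reweighted least squares in which large (outlier) residuals are exponentially down-weighted. The kernel width and iteration count are arbitrary choices, and the PSWF dictionary construction is omitted.

        import numpy as np

        def mcc_solve(A, y, sigma=1.0, iters=20):
            """argmax_x sum_i exp(-(y_i - (A x)_i)^2 / (2 sigma^2))
            via half-quadratic / reweighted least squares."""
            x = np.linalg.lstsq(A, y, rcond=None)[0]     # MSE solution as a start
            for _ in range(iters):
                r = y - A @ x
                w = np.exp(-r ** 2 / (2 * sigma ** 2))   # outliers get tiny weights
                Aw = A * w[:, None]
                x = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
            return x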

  13. A selection criterion for patterns in reaction–diffusion systems

    PubMed Central

    2014-01-01

    Background: Alan Turing’s work in Morphogenesis has received wide attention during the past 60 years. The central idea behind his theory is that two chemically interacting diffusible substances are able to generate stable spatial patterns, provided certain conditions are met. Ever since, extensive work on several kinds of pattern-generating reaction diffusion systems has been done. Nevertheless, prediction of specific patterns is far from being straightforward, and a great deal of interest in deciphering how to generate specific patterns under controlled conditions prevails. Results: Techniques allowing one to predict what kind of spatial structure will emerge from reaction–diffusion systems remain unknown. In response to this need, we consider a generalized reaction diffusion system on a planar domain and provide an analytic criterion to determine whether spots or stripes will be formed. Our criterion is motivated by the existence of an associated energy function that allows bringing in the intuition provided by phase transitions phenomena. Conclusions: Our criterion is proved rigorously in some situations, generalizing well-known results for the scalar equation where the pattern selection process can be understood in terms of a potential. In more complex settings it is investigated numerically. Our work constitutes a first step towards rigorous pattern prediction in arbitrary geometries/conditions. Advances in this direction are highly applicable to the efficient design of Biotechnology and Developmental Biology experiments, as well as in simplifying the analysis of morphogenetic models. PMID:24476200

  14. What Is True Halving in the Payoff Matrix of Game Theory?

    PubMed Central

    Hasegawa, Eisuke; Yoshimura, Jin

    2016-01-01

    In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player’s current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated. PMID:27487194

  15. What Is True Halving in the Payoff Matrix of Game Theory?

    PubMed

    Ito, Hiromu; Katsumata, Yuki; Hasegawa, Eisuke; Yoshimura, Jin

    2016-01-01

    In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player's current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated.

  16. A Web-Based Multidrug-Resistant Organisms Surveillance and Outbreak Detection System with Rule-Based Classification and Clustering

    PubMed Central

    Tseng, Yi-Ju; Wu, Jung-Hsuan; Ping, Xiao-Ou; Lin, Hui-Chi; Chen, Ying-Yu; Shang, Rung-Ji; Chen, Ming-Yuan; Lai, Feipei

    2012-01-01

    Background: The emergence and spread of multidrug-resistant organisms (MDROs) are causing a global crisis. Combating antimicrobial resistance requires prevention of transmission of resistant organisms and improved use of antimicrobials. Objectives: To develop a Web-based information system for automatic integration, analysis, and interpretation of the antimicrobial susceptibility of all clinical isolates that incorporates rule-based classification and cluster analysis of MDROs and implements control chart analysis to facilitate outbreak detection. Methods: Electronic microbiological data from a 2200-bed teaching hospital in Taiwan were classified according to predefined criteria of MDROs. The numbers of organisms, patients, and incident patients in each MDRO pattern were presented graphically to describe spatial and time information in a Web-based user interface. Hierarchical clustering with 7 upper control limits (UCL) was used to detect suspicious outbreaks. The system’s performance in outbreak detection was evaluated based on vancomycin-resistant enterococcal outbreaks determined by a hospital-wide prospective active surveillance database compiled by infection control personnel. Results: The optimal UCL for MDRO outbreak detection was the upper 90% confidence interval (CI) using the germ criterion with clustering (area under ROC curve (AUC) 0.93, 95% CI 0.91 to 0.95), the upper 85% CI using the patient criterion (AUC 0.87, 95% CI 0.80 to 0.93), and one standard deviation using the incident patient criterion (AUC 0.84, 95% CI 0.75 to 0.92). The performance indicators of each UCL were statistically significantly higher with clustering than those without clustering for the germ criterion (P < .001), patient criterion (P = .04), and incident patient criterion (P < .001). Conclusion: This system automatically identifies MDROs and accurately detects suspicious outbreaks of MDROs based on the antimicrobial susceptibility of all clinical isolates. PMID:23195868

  17. Selecting among competing models of electro-optic, infrared camera system range performance

    USGS Publications Warehouse

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike’s Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for some distances other than the specific set for which experimental trials were conducted.
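
    A small sketch of the AIC-based comparison used here: each candidate range-performance model is fit by maximum likelihood and then ranked, with Akaike weights expressing relative support. The model names and numbers in the usage line are placeholders.

        import numpy as np

        def aic_table(models):
            """models: list of (name, max_log_likelihood, n_params) tuples."""
            aic = np.array([2 * k - 2 * ll for _, ll, k in models])
            delta = aic - aic.min()  # difference from the best model
            w = np.exp(-delta / 2)
            w /= w.sum()             # Akaike weights
            order = np.argsort(aic)
            return [(models[i][0], aic[i], delta[i], w[i]) for i in order]

        # aic_table([("model-A", -412.3, 3), ("model-B", -405.9, 4)])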

  18. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    PubMed

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  19. Reconsidering vocational interests for personnel selection: the validity of an interest-based selection test in relation to job knowledge, job performance, and continuance intentions.

    PubMed

    Van Iddekinge, Chad H; Putka, Dan J; Campbell, John P

    2011-01-01

    Although vocational interests have a long history in vocational psychology, they have received extremely limited attention within the recent personnel selection literature. We reconsider some widely held beliefs concerning the (low) validity of interests for predicting criteria important to selection researchers, and we review theory and empirical evidence that challenge such beliefs. We then describe the development and validation of an interest-based selection measure. Results of a large validation study (N = 418) reveal that interests predicted a diverse set of criteria—including measures of job knowledge, job performance, and continuance intentions—with corrected, cross-validated Rs that ranged from .25 to .46 across the criteria (mean R = .31). Interests also provided incremental validity beyond measures of general cognitive aptitude and facets of the Big Five personality dimensions in relation to each criterion. Furthermore, with a couple of exceptions, the interest scales were associated with small to medium subgroup differences, which in most cases favored women and racial minorities. Taken as a whole, these results appear to call into question the prevailing thought that vocational interests have limited usefulness for selection.

  20. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    NASA Technical Reports Server (NTRS)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes that interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The Root-Sum-Square (RSS) displacement method presented here computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes that most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves, and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
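
    A minimal numpy sketch of the RSS ranking idea follows; the mode-shape matrix here is random, standing in for actual FEM eigenvector output, and the DOF indices are invented.

        import numpy as np

        def rank_modes_by_rss(phi, dof_idx):
            """Rank modes by root-sum-square displacement at selected DOFs.

            phi     : (n_dof, n_modes) mode-shape matrix from a FEM solution.
            dof_idx : indices of the DOFs of interest (e.g. an actuator point).
            """
            rss = np.sqrt((phi[dof_idx, :] ** 2).sum(axis=0))  # one value per mode
            order = np.argsort(rss)[::-1]                      # largest first
            return order, rss[order]

        rng = np.random.default_rng(0)
        phi = rng.normal(size=(300, 20))        # toy 300-DOF, 20-mode model
        modes, scores = rank_modes_by_rss(phi, dof_idx=[10, 11, 12])
        print("most influential modes:", modes[:5])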

  1. Recurrent personality dimensions in inclusive lexical studies: indications for a big six structure.

    PubMed

    Saucier, Gerard

    2009-10-01

    Previous evidence for both the Big Five and the alternative six-factor model has been drawn from lexical studies with relatively narrow selections of attributes. This study examined factors from previous lexical studies using a wider selection of attributes in 7 languages (Chinese, English, Filipino, Greek, Hebrew, Spanish, and Turkish) and found 6 recurrent factors, each with common conceptual content across most of the studies. The previous narrow-selection-based six-factor model outperformed the Big Five in capturing the content of the 6 recurrent wideband factors. Adjective markers of the 6 recurrent wideband factors showed substantial incremental prediction of important criterion variables over and above the Big Five. Correspondence between the wideband 6 and narrowband 6 factors indicates that they are variants of a "Big Six" model that is more general across variable-selection procedures and may be more general across languages and populations.

  2. Do birds of a feather flock together? The variable bases for African American, Asian American, and European American adolescents' selection of similar friends.

    PubMed

    Hamm, J V

    2000-03-01

    Variability in adolescent-friend similarity is documented in a diverse sample of African American, Asian American, and European American adolescents. Similarity was greatest for substance use, modest for academic orientations, and low for ethnic identity. Compared with Asian American and European American adolescents, African American adolescents chose friends who were less similar with respect to academic orientation or substance use but more similar with respect to ethnic identity. For all three ethnic groups, personal endorsement of the dimension in question and selection of cross-ethnic-group friends heightened similarity. Similarity was a relative rather than an absolute selection criterion: Adolescents did not choose friends with identical orientations. These findings call for a comprehensive theory of friendship selection sensitive to diversity in adolescents' experiences. Implications for peer influence and self-development are discussed.

  3. On computing Gröbner bases in rings of differential operators

    NASA Astrophysics Data System (ADS)

    Ma, Xiaodong; Sun, Yao; Wang, Dingkang

    2011-05-01

    Insa and Pauer presented a basic theory of Gröbner bases for differential operators with coefficients in a commutative ring in 1998, and proposed a criterion to determine whether a set of differential operators is a Gröbner basis. In this paper, we give a new criterion that includes Insa and Pauer's criterion as a special case and allows Gröbner bases to be computed more efficiently.

  4. Estimation of genetic variance for macro- and micro-environmental sensitivity using double hierarchical generalized linear models

    PubMed Central

    2013-01-01

    Background Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike’s information criterion using h-likelihood to select the best fitting model. Methods We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike’s information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Results Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike’s information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring. PMID:23827014

  5. An efficient adaptive sampling strategy for global surrogate modeling with applications in multiphase flow simulation

    NASA Astrophysics Data System (ADS)

    Mo, S.; Lu, D.; Shi, X.; Zhang, G.; Ye, M.; Wu, J.

    2016-12-01

    Surrogate models have shown remarkable computational efficiency in hydrological simulations involving design space exploration, sensitivity analysis, uncertainty quantification, etc. The central task in constructing a global surrogate model is to achieve a prescribed approximation accuracy with as few original model executions as possible, which requires a good design strategy to optimize the distribution of data points in the parameter domains and an effective stopping criterion to automatically terminate the design process when the desired approximation accuracy is achieved. This study proposes a novel adaptive sampling strategy, which starts from a small number of initial samples and adaptively selects additional samples by balancing collection in unexplored regions against refinement in interesting areas. We define an efficient and effective evaluation metric based on a Taylor expansion to select the most promising potential samples from candidate points, and propose a robust stopping criterion based on the approximation accuracy at new points to guarantee that the desired accuracy is achieved. The numerical results of several benchmark analytical functions indicate that the proposed approach is more computationally efficient and robust than the widely used maximin distance design and two other well-known adaptive sampling strategies. The application to two complicated multiphase flow problems further demonstrates the efficiency and effectiveness of our method in constructing global surrogate models for high-dimensional and highly nonlinear problems. Acknowledgements: This work was financially supported by the National Nature Science Foundation of China grants No. 41030746 and 41172206.
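
    The paper's exact Taylor-expansion metric is not reproduced here; the Python sketch below uses a crude first-order stand-in, scoring each candidate by its distance to the nearest existing sample inflated by a local variation estimate, only to illustrate the explore/refine balance that adaptive sampling tries to strike.

        import numpy as np

        def next_sample(X, y, candidates):
            """Pick the next design point: balance unexplored regions against
            regions where the response changes rapidly (a crude stand-in for
            the paper's Taylor-expansion-based metric)."""
            X, y = np.asarray(X, float), np.asarray(y, float)
            scores = []
            for c in candidates:
                d = np.linalg.norm(X - c, axis=1)
                j = d.argmin()                      # nearest existing sample
                k = d.argsort()[1]                  # second nearest
                # first-order (Taylor-like) local variation estimate
                grad = abs(y[j] - y[k]) / (np.linalg.norm(X[j] - X[k]) + 1e-12)
                scores.append(d[j] * (1.0 + grad))  # exploration x refinement
            return candidates[int(np.argmax(scores))]

        X = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.5]])   # existing design
        y = np.array([1.0, 3.5, 2.0])                        # model responses
        cands = np.random.default_rng(1).random((50, 2))     # candidate pool
        print("next point:", next_sample(X, y, cands))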

  6. Accuracy of Area at Risk Quantification by Cardiac Magnetic Resonance According to the Myocardial Infarction Territory.

    PubMed

    Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja

    2017-05-01

    Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T2-weighted short-tau triple-inversion recovery (T2W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T2W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T2W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits = -12.68%-13.68% and r = 0.87; P = .001; mean error = -1.5%; limits = -8.0%-5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits = -7.55%-9.37%). T2W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  7. How the mind shapes action: Offline contexts modulate involuntary episodic retrieval.

    PubMed

    Frings, Christian; Koch, Iring; Moeller, Birte

    2017-11-01

    Involuntary retrieval of previous stimulus-response episodes is a centerpiece of many theories of priming, episodic binding, and action control. Typically it is assumed that by repeating a stimulus from trial n-1 to trial n, involuntary retrieval is triggered in a nearly automatic fashion, facilitating (or interfering with) the to-be-executed action. Here we argue that changes in the offline context weaken the involuntary retrieval of previous episodes (the offline context is defined to be the information presented before or after the focal stimulus). In four conditions differing in cue modality and target modality, retrieval was diminished if participants changed the target selection criterion (as indicated by a cue presented before the selection took place) while they still performed the same task. Thus, solely through changes in the offline context (cue or selection criterion), involuntary retrieval can be weakened in an effective way.

  8. Criterion-Referenced Testing and Measurement: A Review of Technical Issues and Developments.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The success of objectives-based programs depends to a considerable extent on how effectively students and teachers assess mastery of objectives and make decisions for future instruction. While educators disagree on the usefulness of criterion-referenced tests, the position taken in this monograph is that criterion-referenced tests are useful, and…

  9. An error criterion for determining sampling rates in closed-loop control systems

    NASA Technical Reports Server (NTRS)

    Brecher, S. M.

    1972-01-01

    The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.

  10. Resistor-logic demultiplexers for nanoelectronics based on constant-weight codes.

    PubMed

    Kuekes, Philip J; Robinett, Warren; Roth, Ron M; Seroussi, Gadiel; Snider, Gregory S; Stanley Williams, R

    2006-02-28

    The voltage margin of a resistor-logic demultiplexer can be improved significantly by basing its connection pattern on a constant-weight code. Each distinct code determines a unique demultiplexer, and therefore a large family of circuits is defined. We consider using these demultiplexers for building nanoscale crossbar memories, and determine the voltage margin of the memory system based on a particular code. We determine a purely code-theoretic criterion for selecting codes that will yield memories with large voltage margins, which is to minimize the ratio of the maximum to the minimum Hamming distance between distinct codewords. For the specific example of a 64 × 64 crossbar, we discuss what codes provide optimal performance for a memory.
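
    The code-theoretic criterion named here is easy to compute directly. A small Python sketch follows, with a toy constant-weight code (length 6, weight 3, invented for illustration); the criterion prefers codes minimising the ratio below.

        from itertools import combinations

        def distance_ratio(code):
            """Ratio of maximum to minimum pairwise Hamming distance between
            distinct codewords; smaller ratios predict larger voltage margins
            per the paper's criterion."""
            dists = [sum(a != b for a, b in zip(u, v))
                     for u, v in combinations(code, 2)]
            return max(dists) / min(dists)

        # toy constant-weight code: length 6, weight 3
        code = ["111000", "100110", "010101", "001011"]
        print(distance_ratio(code))   # 1.0: all pairwise distances equal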

  11. Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Volden, Thomas R.

    2012-01-01

    An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion is suitable only for a crude assessment of the significance of a regression model term, because the boundary between a significant and an insignificant term cannot be defined precisely. Therefore, regression model term reduction should only be performed using the more universally applicable statistical criterion.
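
    One plausible reading of the percent-contribution computation is sketched below in Python; this is an assumption, since the paper's exact definition is not reproduced here. Each term is evaluated at load capacity, normalised by the gage output at capacity, and compared against the 0.05% threshold.

        import numpy as np

        def percent_contribution(coeffs, term_values_at_capacity, output_at_capacity):
            """Percent contribution of each regression term, read here as the
            magnitude of the term evaluated at load capacity relative to the
            gage output at capacity (an assumed simplification of the paper)."""
            term = np.abs(np.asarray(coeffs) * np.asarray(term_values_at_capacity))
            pc = 100.0 * term / abs(output_at_capacity)
            return pc, pc > 0.05      # True = significant per the 0.05% rule

        pc, keep = percent_contribution([2.1, 0.004, -0.9],      # invented coefficients
                                        [1.0, 1.0, 0.0003],      # term values at capacity
                                        output_at_capacity=2500.0)
        print(np.round(pc, 4), keep)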

  12. Variable selection with stepwise and best subset approaches

    PubMed Central

    2016-01-01

    While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are performed automatically by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the stepwise method can be specified in the direction argument with character values “forward”, “backward”, or “both”. The bestglm() function begins with a data frame containing the explanatory variables and the response variable; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
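
    For illustration, here is a minimal Python analogue of what stepAIC() automates in R with direction="forward": forward stepwise selection minimising AIC, using statsmodels and synthetic data.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def forward_aic(X, y):
            """Forward stepwise selection minimising AIC."""
            remaining, chosen = list(X.columns), []
            best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # null model
            improved = True
            while improved and remaining:
                improved = False
                trials = [(sm.OLS(y, sm.add_constant(X[chosen + [c]])).fit().aic, c)
                          for c in remaining]
                aic, cand = min(trials)          # best single addition
                if aic < best_aic:
                    best_aic, improved = aic, True
                    chosen.append(cand)
                    remaining.remove(cand)
            return chosen, best_aic

        rng = np.random.default_rng(0)
        X = pd.DataFrame(rng.normal(size=(100, 4)), columns=list("abcd"))
        y = 2 * X["a"] - X["c"] + rng.normal(size=100)   # only a and c matter
        print(forward_aic(X, y))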

  13. A comparison of LBGs, DRGs, and BzK galaxies: their contribution to the stellar mass density in the GOODS-MUSIC sample

    NASA Astrophysics Data System (ADS)

    Grazian, A.; Salimbeni, S.; Pentericci, L.; Fontana, A.; Nonino, M.; Vanzella, E.; Cristiani, S.; de Santis, C.; Gallozzi, S.; Giallongo, E.; Santini, P.

    2007-04-01

    Context: The classification scheme for high redshift galaxies is complex at the present time, with simple colour-selection criteria (i.e. EROs, IEROs, LBGs, DRGs, BzKs), resulting in ill-defined properties for the stellar mass and star formation rate of these distant galaxies. Aims: The goal of this work is to investigate the properties of different classes of high-z galaxies, focusing in particular on the stellar masses of LBGs, DRGs, and BzKs, in order to derive their contribution to the total mass budget of the distant Universe. Methods: We used the GOODS-MUSIC catalog, containing ~3000 Ks-selected (~10 000 z-selected) galaxies with multi-wavelength coverage extending from the U band to the Spitzer 8~μm band, with spectroscopic or accurate photometric redshifts. We selected samples of BM/BX/LBGs, DRGs, and BzK galaxies to discuss the overlap and the limitations of these criteria, which can be overridden by a selection criterion based on physical parameters. We then measured the stellar masses of these galaxies and computed the stellar mass density (SMD) for the different samples up to redshift ≃4. Results: We show that the BzK-PE criterion is not optimal for selecting early type galaxies at the faint end. On the other hand, BzK-SF is highly contaminated by passively evolving galaxies at red z-Ks colours. We find that LBGs and DRGs contribute almost equally to the global SMD at z≥ 2 and, in general, that star-forming galaxies form a substantial fraction of the universal SMD. Passively evolving galaxies show a strong negative density evolution from redshift 2 to 3, indicating that we are witnessing the epoch of mass assembly of such objects. Finally we have indications that by pushing the selection to deeper magnitudes, the contribution of less massive DRGs could overtake that of LBGs. Deeper surveys, like the HUDF, are required to confirm this suggestion.

  14. Mimic expert judgement through automated procedure for selecting rainfall events responsible for shallow landslide: A statistical approach to validation

    NASA Astrophysics Data System (ADS)

    Giovanna, Vessia; Luca, Pisano; Carmela, Vennari; Mauro, Rossi; Mario, Parise

    2016-01-01

    This paper proposes an automated method for the selection of the rainfall duration (D) and cumulated rainfall (E) responsible for shallow landslide initiation. The method mimics an expert identifying D and E from rainfall records through a manual procedure whose rules are applied according to his or her judgement. The comparison between the two methods is based on 300 D-E pairs drawn from temporal rainfall data series recorded in a 30-day time lag before landslide occurrence. Statistical tests, applied to the D and E samples considered both as paired and as independent values to verify whether they belong to the same population, show that the automated procedure is able to replicate the pairs identified by expert judgement. Furthermore, a criterion based on cumulative distribution functions (CDFs) is proposed to select, from the six pairs produced by the coded procedure, the D-E pair most closely matching the expert one for tracing the empirical rainfall threshold line.

  15. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework.

    PubMed

    Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas

    2016-01-01

    Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities; (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features; and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selection among other modality combinations.

  16. A two-phased fuzzy decision making procedure for IT supplier selection

    NASA Astrophysics Data System (ADS)

    Shohaimay, Fairuz; Ramli, Nazirah; Mohamed, Siti Rosiah; Mohd, Ainun Hafizah

    2013-09-01

    In many studies on fuzzy decision making, linguistic terms are usually represented by fixed triangular or trapezoidal fuzzy numbers. However, the fixed fuzzy numbers used in the decision-making process may not reflect the respondents' actual opinions. Hence, a two-phased fuzzy decision-making procedure is proposed. First, triangular fuzzy numbers were built based on respondents' opinions of the appropriate range (0-100) for each of the seven linguistic scale terms. Then, the fuzzy numbers were integrated into a fuzzy decision-making model. The applicability of the proposed method is demonstrated in a case study of supplier selection in an Information Technology (IT) department. The results produced with the developed fuzzy numbers were consistent with those obtained using fixed fuzzy numbers. However, with the respondent-based set of fuzzy numbers, there was a difference in the ranking of suppliers on criterion X1 (background of supplier). The proposed model, which incorporates respondent-derived fuzzy numbers, is expected to yield more meaningful results for future decision making.

  17. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works only for single-accuracy experiments.

  18. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works only for single-accuracy experiments.
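
    The baseline EI criterion that EQI and EQIE extend is standard and easy to sketch. Below is a minimal Python implementation for minimisation under a Gaussian (kriging) predictive distribution; the predictive means, standard deviations, and current best value are invented inputs.

        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            """Classic single-accuracy EI criterion: the expected amount by
            which a candidate improves on the current best (minimisation),
            under a kriging predictive distribution N(mu, sigma^2)."""
            mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
            z = (f_best - mu) / np.where(sigma > 0, sigma, 1.0)
            ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
            return np.where(sigma > 0, ei, 0.0)   # EI is zero at sampled points

        print(expected_improvement(mu=[1.2, 0.8], sigma=[0.3, 0.01], f_best=1.0))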

  19. On the dynamic nature of response criterion in recognition memory: effects of base rate, awareness, and feedback.

    PubMed

    Rhodes, Matthew G; Jacoby, Larry L

    2007-03-01

    The authors examined whether participants can shift their criterion for recognition decisions in response to the probability that an item was previously studied. Participants in 3 experiments were given recognition tests in which the probability that an item was studied was correlated with its location during the test. Results from all 3 experiments indicated that participants' response criteria were sensitive to the probability that an item was previously studied and that shifts in criterion were robust. In addition, awareness of the bases for criterion shifts and feedback on performance were key factors contributing to the observed shifts in decision criteria. These data suggest that decision processes can operate in a dynamic fashion, shifting from item to item.

  20. A nitric oxide concentration clamp.

    PubMed

    Zhelyaskov, V R; Godwin, D W

    1999-10-01

    We report a new method of generating nitric oxide (NO) that possesses several advantages for experimental use. This method consists of a photolysis chamber where NO is released by illuminating photolabile NO donors with light from a xenon lamp, in conjunction with feedback control. Control of the photolysis light was achieved by selectively gating light projected through a shutter before the light was launched into a light guide that conveyed the light to the photolysis chamber. By gating the light in proportion to a sensor that reported nearly instantaneous concentration from the photolysis chamber, a criterion NO concentration could be achieved, which could be easily adjusted to higher or lower criterion levels. To denote the similarity of this process with the electrophysiological process of voltage clamp, we term this process a concentration "clamp." This development enhances the use of the fiber-optic-based system for NO delivery and should enable the execution of experiments where the in situ concentration of NO is particularly critical, such as in biological preparations. Copyright 1999 Academic Press.

  1. Prediction of Burst Pressure in Multistage Tube Hydroforming of Aerospace Alloys.

    PubMed

    Saboori, M; Gholipour, J; Champliaud, H; Wanjara, P; Gakwaya, A; Savoie, J

    2016-08-01

    Bursting, an irreversible failure in tube hydroforming (THF), results mainly from the local plastic instabilities that occur when the biaxial stresses imparted during the process exceed the forming limit strains of the material. To predict the burst pressure, Oyane's and Brozzo's decoupled ductile fracture criteria (DFC) were implemented as user material models in a dynamic nonlinear commercial 3D finite-element (FE) software, ls-dyna. THF of a round to V-shape was selected as a generic representative of an aerospace component for the FE simulations and experimental trials. To validate the simulation results, THF experiments up to bursting were carried out using Inconel 718 (IN 718) tubes with a thickness of 0.9 mm to measure the internal pressures during the process. When comparing the experimental and simulation results, the burst pressure predicted based on Oyane's decoupled damage criterion was found to agree better with the measured data for IN 718 than Brozzo's fracture criterion.

  2. Color vision deficiency in Zahedan, Iran: lower than expected.

    PubMed

    Momeni-Moghaddam, Hamed; Ng, Jason S; Robabi, Hassan; Yaghubi, Farshid

    2014-11-01

    To estimate the prevalence of congenital red-green color vision defects in the elementary school students of Zahedan in 2012. In this cross-sectional study, 1000 students with a mean (±SD) age of 9.0 (±1.4) years were selected randomly from a large primary school population. Color vision was evaluated using the Ishihara pseudoisochromatic color plates (38-plate edition). A daylight fluorescent tube was used as an illuminant C equivalent (i.e., 860 lux, color rendering index greater than 92, and color temperature = 6500 K). Having more than three misreadings on the test was considered a failing criterion. Data were analyzed in SPSS version 17 software using χ2 tests. Nine students (0.9%) made more than three errors on the Ishihara test. Based on this criterion, the prevalence of red-green color vision deficiency in girls and boys was 0.2 and 1.6% (p = 0.02), respectively. The prevalence of red-green color vision deficiency was found to be significantly lower in Zahedan than comparable reports in the literature.

  3. Vortex identification from local properties of the vorticity field

    NASA Astrophysics Data System (ADS)

    Elsas, J. H.; Moriconi, L.

    2017-01-01

    A number of systematic procedures for the identification of vortices/coherent structures have been developed as a way to address their possible kinematical and dynamical roles in structural formulations of turbulence. It has been broadly acknowledged, however, that vortex detection algorithms, usually based on linear-algebraic properties of the velocity gradient tensor, can be plagued with severe shortcomings and may become, in practical terms, dependent on the choice of subjective threshold parameters in their implementations. In two dimensions, a large class of standard vortex identification prescriptions turn out to be equivalent to the "swirling strength criterion" (λci-criterion), which is critically revisited in this work. We classify the instances where the accuracy of the λci-criterion is affected by nonlinear superposition effects and propose an alternative vortex detection scheme based on the local curvature properties of the vorticity graph (x, y, ω)—the "vorticity curvature criterion" (λω-criterion)—which improves over the results obtained with the λci-criterion in controlled Monte Carlo tests. A particularly problematic issue, given its importance in wall-bounded flows, is the eventual inadequacy of the λci-criterion for many-vortex configurations in the presence of strong background shear. We show that the λω-criterion is able to cope with these cases as well, if a subtraction of the mean velocity field background is performed, in the spirit of the Reynolds decomposition procedure. A realistic comparative study for vortex identification is then carried out for a direct numerical simulation of a turbulent channel flow, including a three-dimensional extension of the λω-criterion. In contrast to the λci-criterion, the λω-criterion indicates in a consistent way the existence of small-scale isotropic turbulent fluctuations in the logarithmic layer, in consonance with long-standing assumptions commonly taken in turbulent boundary layer phenomenology.
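
    In two dimensions the swirling-strength field can be computed directly from the velocity gradient tensor, as the imaginary part of its complex eigenvalue pair. A minimal numpy sketch follows; the grid layout and the solid-body-rotation test case are illustrative choices, not the paper's setup.

        import numpy as np

        def swirling_strength_2d(u, v, dx=1.0, dy=1.0):
            """lambda_ci field: imaginary part of the complex eigenvalues of
            the 2-D velocity gradient tensor, zero where the eigenvalues are
            real (no local swirl). Arrays are indexed [x, y]."""
            dudx, dudy = np.gradient(u, dx, dy)
            dvdx, dvdy = np.gradient(v, dx, dy)
            trace = dudx + dvdy
            det = dudx * dvdy - dudy * dvdx
            disc = trace ** 2 - 4.0 * det          # < 0  => complex pair
            return np.where(disc < 0.0, 0.5 * np.sqrt(np.maximum(-disc, 0.0)), 0.0)

        # solid-body-rotation test: u = -y, v = x  =>  lambda_ci = 1 everywhere
        x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41),
                           indexing="ij")
        lam = swirling_strength_2d(-y, x, dx=0.05, dy=0.05)
        print(lam[20, 20])   # ~1.0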

  4. Blunt Criterion trauma model for head and chest injury risk assessment of cal. 380 R and cal. 22 long blank cartridge actuated gundog retrieval devices.

    PubMed

    Frank, Matthias; Bockholdt, Britta; Peters, Dieter; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Hinz, Peter

    2011-05-20

    Blunt ballistic impact trauma is a current research topic due to the widespread use of kinetic energy munitions in law enforcement. In the civilian setting, an automatic dummy launcher has recently been identified as source of blunt impact trauma. However, there is no data on the injury risk of conventional dummy launchers. It is the aim of this investigation to predict potential impact injury to the human head and chest on the basis of the Blunt Criterion which is an energy based blunt trauma model to assess vulnerability to blunt weapons, projectile impacts, and behind-armor-exposures. Based on experimentally investigated kinetic parameters, the injury risk of two commercially available gundog retrieval devices (Waidwerk Telebock, Germany; Turner Richards, United Kingdom) was assessed using the Blunt Criterion trauma model for blunt ballistic impact trauma to the head and chest. Assessing chest impact, the Blunt Criterion values for both shooting devices were higher than the critical Blunt Criterion value of 0.37, which represents a 50% risk of sustaining a thoracic skeletal injury of AIS 2 (moderate injury) or AIS 3 (serious injury). The maximum Blunt Criterion value (1.106) was higher than the Blunt Criterion value corresponding to AIS 4 (severe injury). With regard to the impact injury risk to the head, both devices surpass by far the critical Blunt Criterion value of 1.61, which represents a 50% risk of skull fracture. Highest Blunt Criterion values were measured for the Turner Richards Launcher (2.884) corresponding to a risk of skull fracture of higher than 80%. Even though the classification as non-guns by legal authorities might implicate harmlessness, the Blunt Criterion trauma model illustrates the hazardous potential of these shooting devices. The Blunt Criterion trauma model links the laboratory findings to the impact injury patterns of the head and chest that might be expected. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Harmonic wavelet packet transform for on-line system health diagnosis

    NASA Astrophysics Data System (ADS)

    Yan, Ruqiang; Gao, Robert X.

    2004-07-01

    This paper presents a new approach to on-line health diagnosis of mechanical systems, based on the wavelet packet transform. Specifically, signals acquired from vibration sensors are decomposed into sub-bands by means of the discrete harmonic wavelet packet transform (DHWPT). Based on the Fisher linear discriminant criterion, features in the selected sub-bands are then used as inputs to three classifiers (one based on the nearest neighbor rule and two based on neural networks) for system health condition assessment. Experimental results have confirmed that, compared with the conventional approach, in which statistical parameters are computed from the raw signals, the presented approach achieves a higher signal-to-noise ratio and makes more effective and intelligent use of the sensory information, thus leading to more accurate system health diagnosis.
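
    A per-feature Fisher-ratio ranking of sub-band features is straightforward to sketch in numpy; the healthy/faulty feature sets below are synthetic stand-ins for DHWPT sub-band statistics.

        import numpy as np

        def fisher_ratio(Xa, Xb):
            """Fisher linear discriminant criterion per feature: squared
            between-class mean separation over summed within-class variance.
            Higher = more discriminative sub-band feature."""
            ma, mb = Xa.mean(axis=0), Xb.mean(axis=0)
            va, vb = Xa.var(axis=0, ddof=1), Xb.var(axis=0, ddof=1)
            return (ma - mb) ** 2 / (va + vb + 1e-12)

        rng = np.random.default_rng(2)
        healthy = rng.normal(0.0, 1.0, size=(50, 8))   # 8 sub-band features
        faulty = rng.normal(0.0, 1.0, size=(50, 8))
        faulty[:, 3] += 2.0                            # fault shows up in band 3
        scores = fisher_ratio(healthy, faulty)
        print("most discriminative sub-band:", int(np.argmax(scores)))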

  6. VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA

    PubMed Central

    Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu

    2009-01-01

    We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190

  7. Inviscid Flow Computations of Several Aeroshell Configurations for a '07 Mars Lander

    NASA Technical Reports Server (NTRS)

    Prabhu, Ramadas K.

    2001-01-01

    This report documents the results of an inviscid computational study conducted on several candidate aeroshell configurations for a proposed '07 Mars lander. Eleven different configurations were considered, and the aerodynamic characteristics of each were computed for a Mach number of 23.7 at 10, 15, and 20 degree angles of attack. The unstructured grid software FELISA with the equilibrium Mars gas option was used for these computations. The pitching moment characteristics and the lift-to-drag (L/D) ratios at trim angle of attack of each configuration were examined to make a selection. The criterion for selection was that the configuration should be longitudinally stable and should trim at an angle of attack where the L/D is -0.25. Based on the present study, two configurations were selected for further study.

  8. Evaluation of the best fit distribution for partial duration series of daily rainfall in Madinah, western Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.

    2014-09-01

    Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict future flood magnitudes from the magnitude and frequency of extreme rainfall events. This study analyses the application of rainfall partial duration series (PDS) in the rapidly growing city of Madinah, located in western Saudi Arabia. Several statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, and Log Pearson Type III) and their parameters were estimated using L-moments methods. Several model selection criteria were also applied, e.g. the Akaike Information Criterion (AIC), the Corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC), and the Anderson-Darling Criterion (ADC). The analysis identified the Generalized Extreme Value distribution as the best fit for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
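
    A minimal Python sketch of the fit-and-compare workflow using scipy follows; note that scipy fits by maximum likelihood rather than the paper's L-moments, so this is only an illustrative stand-in, and the rainfall sample is synthetic.

        import numpy as np
        from scipy import stats

        def fit_and_score(dist, data):
            """Fit a candidate distribution by ML and return its AIC and BIC."""
            params = dist.fit(data)
            ll = dist.logpdf(data, *params).sum()
            k, n = len(params), len(data)
            return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

        rng = np.random.default_rng(3)
        rain = stats.genextreme.rvs(c=-0.1, loc=20, scale=8, size=200,
                                    random_state=rng)   # toy daily-extreme series
        for name, dist in [("GEV", stats.genextreme),
                           ("Gumbel (EV-I)", stats.gumbel_r),
                           ("Normal", stats.norm)]:
            aic, bic = fit_and_score(dist, rain)
            print(f"{name:14s} AIC={aic:8.1f} BIC={bic:8.1f}")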

  9. Swedish PE Teachers Struggle with Assessment in a Criterion-Referenced Grading System

    ERIC Educational Resources Information Center

    Svennberg, Lena; Meckbach, Jane; Redelius, Karin

    2018-01-01

    In the field of education, the international trend is to turn to criterion-referenced grading in the hope of achieving accountable and consistent grades. Despite a national criterion-referenced grading system emphasising knowledge as the only base for grading, Swedish physical education (PE) grades have been shown to value non-knowledge factors,…

  10. Research Supervisors' Perceptions of Effective Practices for Selecting Successful Research Candidates

    ERIC Educational Resources Information Center

    Blunt, R. J. S.

    2009-01-01

    This investigation elicited the perceptions of thirteen of the most successful research supervisors from one university, with a view to identifying their approaches to selecting research candidates. The supervisors were identified by the university's research office using the single criterion of having the largest number of completed research…

  11. Plasma polar lipid profiles of channel catfish with different growth rates

    USDA-ARS?s Scientific Manuscript database

    Increased growth in channel catfish is an economically important trait and has been used as a criterion for the selection and development of brood fish. Selection of channel catfish toward increased growth usually results in the accumulation of large amounts of fats in their abdomen rather than incr...

  12. App Development Paradigms for Instructional Developers

    ERIC Educational Resources Information Center

    Luterbach, Kenneth J.; Hubbell, Kenneth R.

    2015-01-01

    To create instructional apps for desktop, laptop and mobile devices, developers must select a development tool. Tool selection is critical and complicated by the large number and variety of app development tools. One important criterion to consider is the type of development environment, which may primarily be visual or symbolic. Those distinct…

  13. Job task characteristics of Australian emergency services volunteers during search and rescue operations.

    PubMed

    Silk, Aaron; Lenton, Gavin; Savage, Robbie; Aisbett, Brad

    2018-02-01

    Search and rescue operations are necessary in locating, assisting and recovering individuals lost or in distress. In Australia, land-based search and rescue roles require a range of physically demanding tasks undertaken in dynamic and challenging environments. The aim of the current research was to identify and characterise the physically demanding tasks inherent to search and rescue operation personnel within Australia. These aims were met through a subjective job task analysis approach. In total, 11 criterion tasks were identified by personnel. These tasks were the most physically demanding, most frequently occurring and most operationally important tasks in these specialist roles. Muscular strength was the dominant fitness component for 7 of the 11 tasks. In addition to the discrete criterion tasks, an operational scenario was established. With the tasks and operational scenario identified, objective task analysis procedures can be undertaken so that practitioners can implement evidence-based strategies, such as physical selection procedures and task-based physical training programs, commensurate with the physical demands of search and rescue job roles. Practitioner Summary: The identification of physically demanding tasks in specialist emergency service roles underpins health and safety strategies that organisations can adopt. Knowledge of physical task parameters allows employers to mitigate injury risk through strategies modelled on the precise physical demands of the role.

  14. VizieR Online Data Catalog: Properties of giant arcs behind CLASH clusters (Xu+, 2016)

    NASA Astrophysics Data System (ADS)

    Xu, B.; Postman, M.; Meneghetti, M.; Seitz, S.; Zitrin, A.; Merten, J.; Maoz, D.; Frye, B.; Umetsu, K.; Zheng, W.; Bradley, L.; Vega, J.; Koekemoer, A.

    2018-01-01

    Giant arcs are found in the CLASH images and in simulated images that mimic the CLASH data, using an efficient automated arc-finding algorithm whose selection function has been carefully quantified. CLASH is a 524-orbit multicycle treasury program that targeted 25 massive clusters spanning the redshift range 0.18-0.90. We identify giant arcs with length-to-width ratio l/w >= 6.5 in 20 X-ray-selected CLASH clusters. After applying our minimum arc length criterion l >= 6", the arc count drops to 81 giant arcs selected from the 20 X-ray-selected CLASH clusters. (2 data files).

  15. Development and Field Test of Task-Based MOS (Military Occupational Specialties)-Specific Criterion Measures. Project A. Improving the Selection, Classification, and Utilization of Army Enlisted Personnel

    DTIC Science & Technology

    1986-07-01

    a free-response format can be used to test knowledge of a task sequence, but such formats demand more of the soldier’s literacy skills and are more...correlations (over .40) with strong knowledge counterparts, or that overlapped with similar skilled psychomotor hands-on tests. However, if dropping...tested on Skill Level 1 soldiers and noncommissioned officers. Field tests were conducted among 114-178 soldiers per MOS. Results were used to revise the

  16. Reliability analysis of structural ceramics subjected to biaxial flexure

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1991-01-01

    The reliability of alumina disks subjected to biaxial flexure is predicted on the basis of statistical fracture theory using a critical strain energy release rate fracture criterion. Results on a sintered silicon nitride are consistent with reliability predictions based on pore-initiated penny-shaped cracks with preferred orientation normal to the maximum principal stress. Assumptions with regard to flaw types and their orientations in each ceramic can be justified by fractography. It is shown that there are no universal guidelines for selecting fracture criteria or assuming flaw orientations in reliability analyses.

  17. Mixture Rasch model for guessing group identification

    NASA Astrophysics Data System (ADS)

    Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling

    2013-04-01

    Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for guessing effects in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing parameter implies. A derivation from the three-parameter logistic IRT model improves the situation by incorporating ability in guessing; however, it does not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.

  18. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    The Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
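
    A minimal sketch of selecting the number of Weibull mixture components by AIC/BIC follows, fitting the mixture likelihood directly with scipy on synthetic data; the paper's estimation details may differ, so this only illustrates the criterion comparison.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import weibull_min

        def mixture_nll(theta, x, k):
            """Negative log-likelihood of a k-component Weibull mixture.
            theta packs [k-1 weight logits, k log-shapes, k log-scales]."""
            w = np.ones(k)
            w[:k-1] = np.exp(theta[:k-1])
            w /= w.sum()
            shapes = np.exp(theta[k-1:2*k-1])
            scales = np.exp(theta[2*k-1:])
            pdf = sum(w[i] * weibull_min.pdf(x, shapes[i], scale=scales[i])
                      for i in range(k))
            return -np.log(pdf + 1e-300).sum()

        def fit_mixture(x, k):
            theta0 = np.concatenate([np.zeros(k - 1),
                                     np.log(np.full(k, 2.0)),
                                     np.log(np.quantile(x, np.linspace(.3, .7, k)))])
            res = minimize(mixture_nll, theta0, args=(x, k), method="Nelder-Mead",
                           options={"maxiter": 5000, "maxfev": 5000})
            p = 3 * k - 1                       # free parameters
            return 2 * p + 2 * res.fun, p * np.log(len(x)) + 2 * res.fun

        rng = np.random.default_rng(4)          # bimodal toy "wind power" sample
        power = np.concatenate([
            weibull_min.rvs(1.8, scale=0.3, size=400, random_state=rng),
            weibull_min.rvs(4.0, scale=0.8, size=400, random_state=rng)])
        for k in (1, 2, 3):
            aic, bic = fit_mixture(power, k)
            print(f"k={k}: AIC={aic:.1f} BIC={bic:.1f}")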

  19. Method Development for Clinical Comprehensive Evaluation of Pediatric Drugs Based on Multi-Criteria Decision Analysis: Application to Inhaled Corticosteroids for Children with Asthma.

    PubMed

    Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling

    2018-04-01

    Establishing a comprehensive clinical evaluation system is critical in enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; therefore, a systematic and objective method is required to guide the process. An evidence-based multi-criteria decision analysis model that involved an analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weight based on the AHP; (4) construct the evidence body for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) place a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the drug analysis model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number: 7) ranked the highest, with a comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number: 16), with a score of 79.59, and budesonide inhalation powder (drug ID number: 6), with a score of 78.98. In the sensitivity analysis, the rankings of the top five and lowest five drugs remained unchanged, suggesting this model is generally robust. An evidence-based drug evaluation model based on AHP was successfully developed. The model incorporates sufficient utility and flexibility for aiding the decision-making process, and can be a useful tool for the CCES-P.
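
    Step (3), deriving criterion weights with AHP, conventionally uses the principal eigenvector of a pairwise-comparison matrix. A minimal numpy sketch follows; the example matrix and the criteria it compares are invented for illustration.

        import numpy as np

        def ahp_weights(pairwise):
            """Criterion weights from an AHP pairwise-comparison matrix: the
            principal right eigenvector, normalised to sum to 1. Also returns
            the consistency ratio (CR < 0.1 is the conventional cutoff)."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            vals, vecs = np.linalg.eig(A)
            i = np.argmax(vals.real)
            w = np.abs(vecs[:, i].real)
            w /= w.sum()
            ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random indices
            ci = (vals[i].real - n) / (n - 1)                    # consistency index
            return w, (ci / ri if ri else 0.0)

        # e.g. efficacy vs. safety vs. cost on Saaty's 1-9 scale (invented)
        A = [[1,   3,   5],
             [1/3, 1,   3],
             [1/5, 1/3, 1]]
        w, cr = ahp_weights(A)
        print(np.round(w, 3), round(cr, 3))   # weights ~[0.64, 0.26, 0.10], CR < 0.1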

  20. 3-D Mixed Mode Delamination Fracture Criteria - An Experimentalist's Perspective

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2006-01-01

    Many delamination failure criteria based on fracture toughness have been suggested over the past few decades, but most only covered the region containing mode I and mode II components of loading because that is where toughness data existed. With new analysis tools, more 3D analyses are being conducted that capture a mode III component of loading. This has increased the need for a fracture criterion that incorporates mode III loading. The introduction of a pure mode III fracture toughness test has also produced data on which to base a full 3D fracture criterion. In this paper, a new framework for visualizing 3D fracture criteria is introduced. The common 2D power law fracture criterion was evaluated to produce unexpected predictions with the introduction of mode III and did not perform well in the critical high mode I region. Another 2D criterion that has been shown to model a wide range of materials well was used as the basis for a new 3D criterion. The new criterion is based on assumptions that the relationship between mode I and mode III toughness is similar to the relation between mode I and mode II and that a linear interpolation can be used between mode II and mode III. Until mixed-mode data exists with a mode III component of loading, 3D fracture criteria cannot be properly evaluated, but these assumptions seem reasonable.

  1. 40 CFR 35.6555 - Competition.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... engineering (A/E) services, the recipient may use geographic location as a selection criterion, provided that... bids or proposals. The recipient must publish the public notice in professional journals, newspapers...

  2. 40 CFR 35.6555 - Competition.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... engineering (A/E) services, the recipient may use geographic location as a selection criterion, provided that... bids or proposals. The recipient must publish the public notice in professional journals, newspapers...

  3. 40 CFR 35.6555 - Competition.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... engineering (A/E) services, the recipient may use geographic location as a selection criterion, provided that... bids or proposals. The recipient must publish the public notice in professional journals, newspapers...

  4. Copula based flexible modeling of associations between clustered event times.

    PubMed

    Geerdens, Candida; Claeskens, Gerda; Janssen, Paul

    2016-07-01

    Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.

  5. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation.
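
    The MMSE-FDE step described above has a standard one-tap frequency-domain form, sketched below for a single block. This is a generic textbook sketch under the assumption of a known channel frequency response; the DS-CDMA-specific despreading, the pilot-assisted MMSE-CE, and the decision-feedback second stage of the proposed 2-step MLCE are omitted.

      import numpy as np

      def mmse_fde(received_block, H, noise_var):
          # One-tap MMSE frequency-domain equalization: per frequency bin k,
          # W(k) = conj(H(k)) / (|H(k)|^2 + noise variance).
          R = np.fft.fft(received_block)
          W = np.conj(H) / (np.abs(H) ** 2 + noise_var)
          return np.fft.ifft(W * R)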

  6. Study on the criterion to determine the bottom deployment modes of a coilable mast

    NASA Astrophysics Data System (ADS)

    Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng

    2017-12-01

    A practical design criterion that allows the bottom of a coilable mast to deploy in the local coil mode was proposed. The criterion was defined in terms of the initial bottom helical angle and obtained from analyses of bottom deformation. Discretizing the longerons into short rods, analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Iterative calculations were then carried out for the bottom four rods. A critical bottom helical angle was obtained when the rate of change of the angle equaled zero. This critical value was defined as the criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of the bottom helical angles in the critical state, the proposed criterion was evaluated and modified: an initial bottom helical angle below the critical value, with a design margin of -13.7%, ensures that the mast bottom deploys in the local coil mode and thus determines a successful local coil deployment of the entire coilable mast.

  7. Cold formability prediction by the modified maximum force criterion with a non-associated Hill48 model accounting for anisotropic hardening

    NASA Astrophysics Data System (ADS)

    Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.

    2016-08-01

    Experimental and numerical investigations of the characterisation and prediction of the cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for plasticity characterisation and forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of the advanced non-quadratic associated models. To accurately characterise the evolution of anisotropy during hardening, anisotropic hardening is also calibrated and implemented into the model for the prediction of formability.

  8. A Criterion-Referenced Viewpoint on Standards/Cutscores in Language Testing.

    ERIC Educational Resources Information Center

    Davidson, Fred; Lynch, Brian K.

    "Standard" is distinguished from "criterion" as it is used in criterion-referenced testing. The former is argued to refer to the real-world cutpoint at which a decision is made based on a test's result (e.g., exemption from a special training program). The latter is a skill or set of skills to which a test is referenced.…

  9. Criterion-Related Validity of the TOEFL iBT Listening Section. TOEFL iBT Research Report. RR-09-02

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo; Nissan, Susan

    2009-01-01

    The study investigated the criterion-related validity of the "Test of English as a Foreign Language"[TM] Internet-based test (TOEFL[R] iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…

  10. Review of the Functions of Archimedes’ Spiral Metallic Nanostructures

    PubMed Central

    Li, Zixiang; Zhang, Jingran; Guo, Kai; Shen, Fei; Zhou, Qingfeng; Zhou, Hongping

    2017-01-01

    Here, we have reviewed some typical plasmonic structures based on Archimedes’ spiral (AS) architectures, which can produce polarization-sensitive focusing phenomena and generate plasmonic vortices (PVs) carrying controllable orbital angular momentum (OAM) because of the relation between the incident polarization states and the chiralities of the spiral structures. These features can be used to analyze different circular polarization states, which has been one of the rapidly developing research topics in nanophotonics in recent years. Many investigations demonstrate that multifunctional spiral-based plasmonic structures are excellent choices for chiral selection and for generating a transmitted field with well-defined OAM. The circular polarization extinction ratio, as an evaluation criterion for the polarization selectivity of a designed structure, can be effectively improved by properly modulating the parameters of the spiral structures. Such functional spiral plasmonic nanostructures are promising for applications in analyzing circularly polarized light, full Stokes vector polarimetric sensors, near-field imaging, and so on. PMID:29165382

  11. An improved partial least-squares regression method for Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Momenpour Tehran Monfared, Ali; Anis, Hanan

    2017-10-01

    It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve BVSPLS based on a novel selection mechanism. The proposed method sorts the weighted regression coefficients and then evaluates the importance of each variable in the sorted list using the root mean square error of prediction (RMSEP) criterion at each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared with PLS. Our IBVSPLS was also compared to the simpler jack-knifing method and the more complex genetic algorithm. Our method was consistently better than the jack-knifing method and showed similar or better performance compared to the genetic algorithm.
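
    One plausible reading of coefficient-sorted backward elimination with an RMSEP stopping rule is sketched below using scikit-learn's PLSRegression. This is an assumption-laden illustration, not the authors' exact IBVSPLS: the coefficient weighting, the single validation split, and one-variable-at-a-time elimination are all simplifications.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def rmsep(model, X, y):
          # Root mean square error of prediction on a held-out set.
          return float(np.sqrt(np.mean((y - model.predict(X).ravel()) ** 2)))

      def backward_pls(X_tr, y_tr, X_val, y_val, n_components=5):
          keep = np.arange(X_tr.shape[1])
          pls = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
          best = rmsep(pls, X_val, y_val)
          improved = True
          while improved and keep.size > n_components:
              improved = False
              # Candidate for removal: variable with the smallest |coefficient|.
              weakest = keep[np.argmin(np.abs(pls.coef_).ravel())]
              trial = keep[keep != weakest]
              cand = PLSRegression(n_components=n_components).fit(X_tr[:, trial], y_tr)
              err = rmsep(cand, X_val[:, trial], y_val)
              if err <= best:   # keep the smaller model if RMSEP does not worsen
                  keep, pls, best, improved = trial, cand, err, True
          return keep, pls, best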

  12. Analyzing Genome-Wide Association Studies with an FDR Controlling Modification of the Bayesian Information Criterion

    PubMed Central

    Dolejsi, Erich; Bodenstorfer, Bernhard; Frommlet, Florian

    2014-01-01

    The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that in the case of complex traits such single-marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we will discuss an alternative model selection approach which is based on a modification of the Bayesian Information Criterion (mBIC2), which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model that minimizes mBIC2 and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case-control GWAS is compared with the two algorithms HLASSO and d-GWASelect, as well as with single-marker tests, in a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO, where specifically for more complex models MOSGWA is more powerful with only a slight increase in Type I error. On the other hand, according to our simulations, d-GWASelect does not at all control the Type I error when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium and compare the findings of the different procedures; MOSGWA detects for complex diseases a number of interesting SNPs that are not found by other methods. PMID:25061809
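
    For orientation, one published form of the mBIC2 penalty for a linear model with k selected markers out of p candidates, fit by least squares to n observations, is sketched below. The constant c and the exact form are quoted from the mBIC2 literature to the best of my knowledge and should be checked against the original papers before use.

      import math

      def mbic2(rss, n, k, p, c=4.0):
          # mBIC2 = n*log(RSS/n) + k*log(n) + 2*k*log(p/c) - 2*log(k!)
          # (smaller is better; the -2*log(k!) term softens the penalty for
          # larger models relative to the original mBIC).
          penalty = (k * math.log(n) + 2.0 * k * math.log(p / c)
                     - 2.0 * math.lgamma(k + 1))
          return n * math.log(rss / n) + penalty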

  13. Can scrotal circumference-based selection discard bulls with good productive and reproductive potential?

    PubMed Central

    Villadiego, Faider Alberto Castaño; Camilo, Breno Soares; León, Victor Gomez; Peixoto, Thiago; Díaz, Edgar; Okano, Denise; Maitan, Paula; Lima, Daniel; Guimarães, Simone Facioni; Siqueira, Jeanne Broch; Pinho, Rogério

    2018-01-01

    Nonlinear mixed models were used to describe longitudinal scrotal circumference (SC) measurements of Nellore bulls. Model comparisons were based on Akaike’s information criterion, the Bayesian information criterion, the error sum of squares, adjusted R2, and the percentage of convergence. Subsequently, the best model was used to compare the SC growth curves of bulls divergently classified according to SC at 18–21 months of age. For this, bulls were classified into five groups: SC < 28cm, 28cm ≤ SC < 30cm, 30cm ≤ SC < 32cm, 32cm ≤ SC < 34cm and SC ≥ 34cm. The Michaelis-Menten model showed the best fit according to the mentioned criteria. In this model, β1 is the asymptotic SC value and β2 represents the time to half-final growth and may be related to sexual precocity. Parameters of the individually estimated growth curves were used to create a new dataset to evaluate the effect of classification, farm, and year of birth on the β1 and β2 parameters. Bulls of the largest SC group presented a larger predicted SC over all analyzed periods; nevertheless, the smallest SC group showed predicted SC similar to that of the intermediate SC groups (28cm ≤ SC < 32cm) around 1200 days of age. In this context, bulls classified as unfit for reproduction at 18–21 months of age can later reach a condition similar to that of bulls considered to be in good condition. In terms of the classification at 18–21 months, asymptotic SC was similar among groups, farms, and years; however, β2 differed among groups, indicating that differences in growth curves are related to sexual precocity. In summary, it seems that selection based on SC at too early an age may lead to discarding bulls with suitable reproductive potential. PMID:29494597
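
    The best-fitting model has a simple closed form, SC(t) = β1 t / (β2 + t), which can be fit per bull by nonlinear least squares. The sketch below is illustrative only; the age-SC data points are invented for the example.

      import numpy as np
      from scipy.optimize import curve_fit

      def michaelis_menten(t, b1, b2):
          # SC(t) = b1 * t / (b2 + t): b1 is the asymptotic scrotal
          # circumference, b2 the age at half of final growth (precocity).
          return b1 * t / (b2 + t)

      # Hypothetical longitudinal data: age in days, SC in cm.
      age = np.array([240.0, 365.0, 480.0, 550.0, 640.0, 730.0, 900.0, 1200.0])
      sc = np.array([18.0, 24.5, 28.0, 30.0, 31.5, 32.5, 34.0, 35.0])
      (b1, b2), _ = curve_fit(michaelis_menten, age, sc, p0=[35.0, 300.0])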

  14. The use of LANDSAT digital data to detect and monitor vegetation water deficiencies. [South Dakota

    NASA Technical Reports Server (NTRS)

    Thompson, D. R.; Wehmanen, O. A.

    1977-01-01

    A technique devised using a vector transformation of LANDSAT digital data to indicate when vegetation is undergoing moisture stress is described. A relation established between the remote sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index) is discussed.

  15. Event selection services in ATLAS

    NASA Astrophysics Data System (ADS)

    Cranshaw, J.; Cuhadar-Donszelmann, T.; Gallas, E.; Hrivnac, J.; Kenyon, M.; McGlone, H.; Malon, D.; Mambelli, M.; Nowak, M.; Viegas, F.; Vinek, E.; Zhang, Q.

    2010-04-01

    ATLAS has developed and deployed event-level selection services based upon event metadata records ("TAGS") and supporting file and database technology. These services allow physicists to extract events that satisfy their selection predicates from any stage of data processing and use them as input to later analyses. One component of these services is a web-based Event-Level Selection Service Interface (ELSSI). ELSSI supports event selection by integrating run-level metadata, luminosity-block-level metadata (e.g., detector status and quality information), and event-by-event information (e.g., triggers passed and physics content). The list of events that survive after some selection criterion is returned in a form that can be used directly as input to local or distributed analysis; indeed, it is possible to submit a skimming job directly from the ELSSI interface using grid proxy credential delegation. ELSSI allows physicists to explore ATLAS event metadata as a means to understand, qualitatively and quantitatively, the distributional characteristics of ATLAS data. In fact, the ELSSI service provides an easy interface to see the highest missing ET events or the events with the most leptons, to count how many events passed a given set of triggers, or to find events that failed a given trigger but nonetheless look relevant to an analysis based upon the results of offline reconstruction, and more. This work provides an overview of ATLAS event-level selection services, with an emphasis upon the interactive Event-Level Selection Service Interface.

  16. [Active surveillance for prostate cancer: usefulness of endorectal MR at 1.5 Tesla with pelvic phased array coil in detecting significant tumors].

    PubMed

    Luyckx, F; Hallouin, P; Barré, C; Aillet, G; Chauveau, P; Hétet, J-F; Bouchot, O; Rigaud, J

    2011-02-01

    To describe and assess MRI signs of significant tumor in a series of patients who all underwent radical prostatectomy and also fulfilled the criteria for active surveillance according to the French "SurAcaP" protocol. The clinical reports of 681 consecutive patients operated on for prostate cancer between 2002 and 2007 were reviewed retrospectively. All patients had endorectal MR at 1.5 Tesla with a pelvic phased array coil (1.5T erMR PPA). Sixty-one patients (8.9%) fulfilled the "SurAcaP" protocol criteria. Preoperative data (MR + core biopsy) were assessed by comparison with whole-mount step section pathology. Of the 61 patients entering the SurAcaP protocol, 85.3% had significant tumor at pathology (non-organ-confined disease (non-OCD) = 8.2%; Gleason sum score > 6 = 39.2%). A new exclusion criterion, "T3MRI±NPS>1", was assessed as a predictive tool for significant tumor (non-OCD at MR, and/or number of positive sextants involved in tumor at MR and/or core biopsy greater than 1). The sensitivity, specificity, PPV, and NPV of the criterion "T3MRI±NPS>1" in predicting significant tumor were, respectively, 77%, 33%, 86%, and 20%. Adding this criterion to the other criteria of the "SurAcaP" protocol could allow the exclusion of all non-OCD and a decrease in the rate of Gleason sum score > 6 (20%). Endorectal MR at 1.5 Tesla with a pelvic phased array coil should be considered when selecting patients for active surveillance in the management of prostate cancer. A criterion based upon MR and core biopsy findings, called "T3MRI±NPS>1", may represent an exclusion criterion due to its ability to predict significant tumor. Copyright © 2010 Elsevier Masson SAS. All rights reserved.
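
    The reported sensitivity, specificity, PPV, and NPV follow from a 2x2 table of the criterion against the pathology reference standard; a minimal generic helper is sketched below (the count arguments are placeholders, not the study's data).

      def diagnostic_metrics(tp, fp, fn, tn):
          # tp/fp/fn/tn: counts from cross-tabulating the binary criterion
          # against the whole-mount pathology reference standard.
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }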

  17. An algorithm to identify rheumatoid arthritis in primary care: a Clinical Practice Research Datalink study

    PubMed Central

    Muller, Sara; Hider, Samantha L; Raza, Karim; Stack, Rebecca J; Hayward, Richard A; Mallen, Christian D

    2015-01-01

    Objective Rheumatoid arthritis (RA) is a multisystem, inflammatory disorder associated with increased levels of morbidity and mortality. While much research into the condition is conducted in the secondary care setting, routinely collected primary care databases provide an important source of research data. This study aimed to update an algorithm to define RA that was previously developed and validated in the General Practice Research Database (GPRD). Methods The original algorithm consisted of two criteria. Individuals meeting at least one were considered to have RA. Criterion 1: ≥1 RA Read code and a disease modifying antirheumatic drug (DMARD) without an alternative indication. Criterion 2: ≥2 RA Read codes, with at least one ‘strong’ code and no alternative diagnoses. Lists of codes for consultations and prescriptions were obtained from the authors of the original algorithm where these were available, or compiled based on the original description and clinical knowledge. 4161 people with a first Read code for RA between 1 January 2010 and 31 December 2012 were selected from the Clinical Practice Research Datalink (CPRD, successor to the GPRD), and the criteria applied. Results Code lists were updated for the introduction of new Read codes and biological DMARDs. 3577/4161 (86%) of people met the updated algorithm for RA, compared to 61% in the original development study. 62.8% of people fulfilled both Criterion 1 and Criterion 2. Conclusions Those wishing to define RA in the CPRD should consider using this updated algorithm, rather than a single RA code, if they wish to identify only those who are most likely to have RA. PMID:26700281
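
    The two-criterion case definition reads directly as a boolean rule. The sketch below is a schematic restatement with illustrative argument names, not the published code lists.

      def meets_ra_definition(n_ra_codes, n_strong_codes,
                              dmard_without_alt_indication, alternative_diagnosis):
          # Criterion 1: at least one RA Read code plus a DMARD prescription
          # that has no alternative indication.
          criterion_1 = n_ra_codes >= 1 and dmard_without_alt_indication
          # Criterion 2: at least two RA Read codes, at least one of them
          # 'strong', and no alternative diagnosis recorded.
          criterion_2 = (n_ra_codes >= 2 and n_strong_codes >= 1
                         and not alternative_diagnosis)
          return criterion_1 or criterion_2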

  18. Are Centers for Disease Control and Prevention Guidelines for Preexposure Prophylaxis Specific Enough? Formulation of a Personalized HIV Risk Score for Pre-Exposure Prophylaxis Initiation.

    PubMed

    Beymer, Matthew R; Weiss, Robert E; Sugar, Catherine A; Bourque, Linda B; Gee, Gilbert C; Morisky, Donald E; Shu, Suzanne B; Javanbakht, Marjan; Bolan, Robert K

    2017-01-01

    Preexposure prophylaxis (PrEP) has emerged as a human immunodeficiency virus (HIV) prevention tool for populations at highest risk for HIV infection. Current US Centers for Disease Control and Prevention (CDC) guidelines for identifying PrEP candidates may not be specific enough to identify gay, bisexual, and other men who have sex with men (MSM) at the highest risk for HIV infection. We created an HIV risk score for HIV-negative MSM based on Syndemics Theory to develop a more targeted criterion for assessing PrEP candidacy. Behavioral risk assessment and HIV testing data were analyzed for HIV-negative MSM attending the Los Angeles LGBT Center between January 2009 and June 2014 (n = 9481). Syndemics Theory informed the selection of variables for a multivariable Cox proportional hazards model. Estimated coefficients were summed to create an HIV risk score, and model fit was compared between our model and CDC guidelines using the Akaike Information Criterion and Bayesian Information Criterion. Approximately 51% of MSM were above a cutpoint that we chose as an illustrative risk score to qualify for PrEP, identifying 75% of all seroconverting MSM. Our model demonstrated a better overall fit when compared with the CDC guidelines (Akaike Information Criterion Difference = 68) in addition to identifying a greater proportion of HIV infections. Current CDC PrEP guidelines should be expanded to incorporate substance use, partner-level, and other Syndemic variables that have been shown to contribute to HIV acquisition. Deployment of such personalized algorithms may better hone PrEP criteria and allow providers and their patients to make a more informed decision prior to PrEP use.
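
    The score construction described above, a sum of estimated Cox coefficients over a person's covariate values, compared against the CDC guidelines by information criteria, can be sketched generically. The helper names and the AIC convention below are standard statistics, not the study's code; fitting the Cox model itself would be done with survival-analysis software.

      import numpy as np

      def risk_score(coefs, covariates):
          # Linear predictor of a fitted Cox model used as a personal risk
          # score: the sum of estimated coefficients times covariate values.
          return float(np.dot(coefs, covariates))

      def aic(neg_log_likelihood, n_params):
          # Akaike Information Criterion (smaller is better), used here to
          # compare the syndemic score against a guideline-based classifier.
          return 2.0 * n_params + 2.0 * neg_log_likelihood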

  19. Thermo-solutal growth of an anisotropic dendrite with six-fold symmetry

    NASA Astrophysics Data System (ADS)

    Alexandrov, D. V.; Galenko, P. K.

    2018-03-01

    A stable growth of dendritic crystal with the six-fold crystalline anisotropy is analyzed in a binary nonisothermal mixture. A selection criterion representing a relationship between the dendrite tip velocity and its tip diameter is derived on the basis of morphological stability analysis and solvability theory. A complete set of nonlinear equations, consisting of the selection criterion and undercooling balance condition, which determines implicit dependencies of the dendrite tip velocity and tip diameter as functions of the total undercooling, is formulated. Exact analytical solutions of these nonlinear equations are found in a parametric form. Asymptotic solutions describing the crystal growth at small Péclet numbers are determined. Theoretical predictions are compared with experimental data obtained for ice dendrites growing in binary water-ethylenglycol solutions as well as in pure water.

  20. Locomotor activity, core body temperature, and circadian rhythms in mice selected for high or low heat loss.

    PubMed

    Mousel, M R; Stroup, W W; Nielsen, M K

    2001-04-01

    Daily locomotor activity, core body temperature, and their circadian rhythms were measured in lines of mice selected for high (MH) or low (ML) heat loss and in unselected controls (MC). Lines were created by selecting for 16 generations in each of three replicates. Collection of locomotor activity and core temperature data spanned Generations 20 and 21, for a total of 352 mice. Physical activity and core body temperature data were accumulated using implanted transmitters and continuous automated collection. Each animal was measured for 3 d. Activity was recorded for each half hour and then averaged for the day; temperature was averaged daily; circadian rhythm was expressed in 12-h (light vs dark) or 6-h periods as well as by fitting cyclic models. Activity means were transformed to log base 2 to lessen heterogeneity of variance within lines. Heat loss for a 15-h period beginning at 1630 and feed intake for 7 d were measured on 74 additional mice in order to estimate the relationship between locomotor activity and heat loss or feed intake. Selection lines were different (P < 0.01) for both locomotor activity and core body temperature. Differences were due to selection (MH-ML, P < 0.01), and there was no evidence of asymmetry of response (P > 0.38). Retransformed from log base 2 to the scale of measurement, mean activity counts were 308, 210, and 150 for MH, MC, and ML, respectively. Mean core temperatures were 37.2, 36.9, and 36.7 degrees C for MH, MC, and ML (P < 0.01), respectively. Females had greater physical activity (P < 0.01) and body temperature (P < 0.01) than males. There was no evidence of a sex × selection criterion interaction for either activity or temperature (P > 0.20). Overall phenotypic correlation between body temperature and log base 2 activity was 0.43 (P < 0.01). Periods during the day were different for both the 12- and 6-h analyses (P < 0.01), but there were no period × selection criterion interactions (P > 0.1) for physical activity or body temperature. More sensitive cyclic models revealed significant (P < 0.01) 24-, 12-, 8-, and 6-h cycles that differed (P < 0.01) among lines. Estimated differences between MH and ML mice in feed intake and heat loss due to locomotor activity were 36 and 11.5%, respectively. Variation in activity thus contributed to variation in feed intake.
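
    The "cyclic models" with 24-, 12-, 8-, and 6-h components are plausibly multi-component cosinor fits; a generic design-matrix sketch is given below. This is an assumption about the model family, not the authors' code.

      import numpy as np

      def cosinor_design(t_hours, periods=(24.0, 12.0, 8.0, 6.0)):
          # Design matrix for a multi-component cosinor model: an intercept
          # (mesor) plus a cosine and sine column for each period.
          t = np.asarray(t_hours, dtype=float)
          cols = [np.ones_like(t)]
          for p in periods:
              w = 2.0 * np.pi / p
              cols += [np.cos(w * t), np.sin(w * t)]
          return np.column_stack(cols)

      # Ordinary least squares fit of log2-transformed activity counts y:
      # beta, *_ = np.linalg.lstsq(cosinor_design(t), np.log2(y), rcond=None)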

  1. How good is crude MDL for solving the bias-variance dilemma? An empirical investigation based on Bayesian networks.

    PubMed

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.

  2. How Good Is Crude MDL for Solving the Bias-Variance Dilemma? An Empirical Investigation Based on Bayesian Networks

    PubMed Central

    Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli

    2014-01-01

    The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204

  3. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
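
    The agreement figures quoted above are Cohen's kappa values. For reference, a minimal computation from a square confusion matrix of paired examiner diagnoses is sketched below; this is the generic statistic, not project code.

      import numpy as np

      def cohens_kappa(confusion):
          # Cohen's kappa for agreement between two examiners, from a square
          # confusion matrix of paired diagnoses.
          m = np.asarray(confusion, dtype=float)
          n = m.sum()
          po = np.trace(m) / n                  # observed agreement
          pe = (m.sum(0) @ m.sum(1)) / n ** 2   # agreement expected by chance
          return (po - pe) / (1.0 - pe)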

  4. Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.

    PubMed

    Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu

    2017-05-01

    The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The most optimal distribution (Weibull, log normal, or log logistic) was selected using the Akaike information criterion, according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, a moment of 54 Nm, and an LNic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.
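
    For a Weibull survival fit, the injury risk curve at a metric value x takes the standard closed form below; the parameter names are generic placeholders, not the study's estimates.

      import math

      def weibull_injury_risk(x, scale, shape):
          # Injury risk curve from a Weibull survival model:
          # P(injury by metric value x) = 1 - exp(-(x / scale) ** shape).
          return 1.0 - math.exp(-((x / scale) ** shape))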

  5. Selection Methods for Undergraduate Admissions in Australia. Does the Australian Predominate Entry Scheme the Australian Tertiary Admissions Rank (ATAR) Have a Future?

    ERIC Educational Resources Information Center

    Blyth, Kathryn

    2014-01-01

    This article considers the Australian entry score system, the Australian Tertiary Admissions Rank (ATAR), and its usage as a selection mechanism for undergraduate places in Australian higher education institutions and asks whether its role as the main selection criterion will continue with the introduction of demand driven funding in 2012.…

  6. 48 CFR 36.602-1 - Selection criteria.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; provided, that application of this criterion leaves an appropriate number of qualified firms, given the...) Unique situations exist involving prestige projects, such as the design of memorials and structures of...

  7. 48 CFR 36.602-1 - Selection criteria.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; provided, that application of this criterion leaves an appropriate number of qualified firms, given the...) Unique situations exist involving prestige projects, such as the design of memorials and structures of...

  8. 48 CFR 36.602-1 - Selection criteria.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; provided, that application of this criterion leaves an appropriate number of qualified firms, given the...) Unique situations exist involving prestige projects, such as the design of memorials and structures of...

  9. 48 CFR 36.602-1 - Selection criteria.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; provided, that application of this criterion leaves an appropriate number of qualified firms, given the...) Unique situations exist involving prestige projects, such as the design of memorials and structures of...

  10. Application of color mixing for safety and quality inspection of agricultural products

    NASA Astrophysics Data System (ADS)

    Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin

    2005-11-01

    In this paper, color-mixing applications for food safety and quality inspection were studied, including two-color mixing and three-color mixing. It was shown that the chromaticness of the visual signal resulting from two- or three-color mixing is directly related to the band ratio of light intensity at the two or three selected wavebands. An optical visual device using color mixing to implement the band ratio criterion was presented. Inspection through human vision, assisted by an optical device that implements the band ratio criterion, would offer flexibility and significant cost savings compared with inspection by a multispectral machine vision system that implements the same criterion. Example applications of this optical color-mixing technique were given for the inspection of chicken carcasses with various diseases and for the detection of chilling injury in cucumbers. Simulation results showed that discrimination by chromaticness, which is directly related to the band ratio, can work very well with proper selection of the two or three narrow wavebands. This novel color-mixing technique for visual inspection can be implemented in visual devices for a variety of applications, ranging from target detection to food safety inspection.

  11. An Efficiency Balanced Information Criterion for Item Selection in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…

  12. Calculus: Readings from the "Mathematics Teacher."

    ERIC Educational Resources Information Center

    Grinstein, Louise S., Ed.; Michaels, Brenda, Ed.

    Many of the suggestions that calculus instructors have published as articles from 1908 through 1973 are included in this book of readings. The main criterion for selecting an item was whether it would be helpful to teachers and students; therefore, those which dealt exclusively with curricular structure were not included. The selected articles are…

  13. Can local adaptation research in plants inform selection of native plant materials? An analysis of experimental methodologies

    USDA-ARS?s Scientific Manuscript database

    Local adaptation research in plants: limitations to synthetic understanding Local adaptation is used as a criterion to select plant materials that will display high fitness in new environments. A large body of research has explored local adaptation in plants, however, to what extent findings can inf...

  14. Teaching Viewed through Student Performance and Selected Effectiveness Factors.

    ERIC Educational Resources Information Center

    Papandreou, Andreas P.

    The degree of influence of selected factors upon effective teaching is investigated, as perceived by students through the criterion of their academic performance. A 2-part questionnaire was developed and presented to 528 graduating high school students in Cyprus in 1994-95. Part 1 consisted of four questions on student gender, academic…

  15. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... State unit in-service training plan responds to needs identified in their training needs assessment and... employment outcomes; and (iv) The State has conducted a needs assessment of the in-service training needs for... Secretary uses the following additional selection criteria to evaluate an application: (a) Evidence of need...

  16. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... State unit in-service training plan responds to needs identified in their training needs assessment and... employment outcomes; and (iv) The State has conducted a needs assessment of the in-service training needs for... Secretary uses the following additional selection criteria to evaluate an application: (a) Evidence of need...

  17. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... State unit in-service training plan responds to needs identified in their training needs assessment and... employment outcomes; and (iv) The State has conducted a needs assessment of the in-service training needs for... Secretary uses the following additional selection criteria to evaluate an application: (a) Evidence of need...

  18. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... State unit in-service training plan responds to needs identified in their training needs assessment and... employment outcomes; and (iv) The State has conducted a needs assessment of the in-service training needs for... Secretary uses the following additional selection criteria to evaluate an application: (a) Evidence of need...

  19. 34 CFR 388.20 - What additional selection criterion is used under this program?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... State unit in-service training plan responds to needs identified in their training needs assessment and... employment outcomes; and (iv) The State has conducted a needs assessment of the in-service training needs for... Secretary uses the following additional selection criteria to evaluate an application: (a) Evidence of need...

  20. Assessing the utility of phase-space-localized basis functions: Exploiting direct product structure and a new basis function selection procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier’s symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.

  1. CA-125 AUC as a predictor for epithelial ovarian cancer relapse.

    PubMed

    Mano, António; Falcão, Amílcar; Godinho, Isabel; Santos, Jorge; Leitão, Fátima; de Oliveira, Carlos; Caramona, Margarida

    2008-01-01

    The aim of the present work was to evaluate the usefulness of the time-normalized CA-125 area under the curve (CA-125 AUC) to signalise epithelial ovarian cancer relapse. Data from one hundred and eleven patients were submitted to two different approaches based on increases in CA-125 AUC to predict patient relapse. In Criterion A, each time-normalized total CA-125 AUC value (AUC(i)) was compared with the immediately preceding one (AUC(i-1)) using the formula AUC(i) ≥ F * AUC(i-1) (several F values were tested) to find the increment most closely associated with patient relapse. In Criterion B, the time-normalized total CA-125 AUC was calculated and several cut-off values were correlated with relapse prediction capacity. In Criterion A the best accuracy was achieved with a factor (F) of 1.25 (an increment of 25% over the previous status), while in Criterion B the best accuracies were achieved with cut-offs of 25, 50, 75 and 100 IU/mL. The mean lead time to relapse achieved with Criterion A was 181 days, while with Criterion B the mean lead times were, respectively, 131, 111, 63 and 11 days. Based on our results, we believe that the combined and sequential application of both criteria in patient relapse detection is highly advisable. A rapid burst in CA-125 AUC in asymptomatic patients should first be evaluated using Criterion A, which has high accuracy (0.85) and a substantial mean lead time to relapse (181 days). If a negative answer is obtained, Criterion B should then be performed to confirm the absence of relapse.
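
    The two rules reduce to simple threshold checks on the running AUC series. The sketch below restates them with illustrative defaults taken from the reported best-performing values; the series itself is whatever time-normalized AUC sequence a clinic computes.

      def criterion_a(auc_series, f=1.25):
          # Relapse signal when the current time-normalized CA-125 AUC value
          # is at least F times the previous one (best accuracy at F = 1.25).
          return [auc_series[i] >= f * auc_series[i - 1]
                  for i in range(1, len(auc_series))]

      def criterion_b(auc_series, cutoff=25.0):
          # Relapse signal when the time-normalized CA-125 AUC exceeds a
          # fixed cut-off (25, 50, 75, and 100 IU/mL were examined).
          return [value >= cutoff for value in auc_series]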

  2. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process models are often deteriorated by noise in the corresponding experimental data. In order to avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Unlike traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set during parameter learning, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization, and robustness, offering potential merit for GSH fermentation process modeling.

  3. Improved targeted immunization strategies based on two rounds of selection

    NASA Astrophysics Data System (ADS)

    Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping

    2018-04-01

    In high-degree targeted immunization, where the number of vaccine doses is limited, when more than one node of the same degree meets the requirement of high degree centrality, how can we choose a certain number of nodes from among them so that the number of immunized nodes does not exceed the limit? In this paper, we introduce a new idea, derived from the selection process of a second-round exam, to solve this problem, and then propose three improved targeted immunization strategies. In these proposed strategies, the immunized nodes are selected through two rounds of selection: we enlarge the quota of the first-round selection according to the evaluation criterion of degree centrality, and then consider another characteristic parameter of each node, such as its clustering coefficient, betweenness, or closeness, to choose the targeted nodes in the second-round selection. To validate the effectiveness of the proposed strategies, we compare them with the degree immunizations (the high degree targeted and the high degree adaptive immunizations) using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of selection are effective for heterogeneous networks and that their immunization effects are better than those of the degree immunizations.
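
    A minimal sketch of the two-round idea, using betweenness as the second-round parameter, is given below with networkx. The enlargement factor, the budget, and the test network are assumptions for the example, not the paper's settings.

      import networkx as nx

      def two_round_targets(graph, budget, first_round_factor=2):
          # Round 1: shortlist the highest-degree nodes, with an enlarged
          # quota so that degree ties are not broken arbitrarily.
          shortlist = sorted(graph.nodes, key=graph.degree, reverse=True)
          shortlist = shortlist[: budget * first_round_factor]
          # Round 2: rank the shortlist by a second characteristic
          # (betweenness here; clustering or closeness are alternatives).
          bc = nx.betweenness_centrality(graph)
          return sorted(shortlist, key=bc.get, reverse=True)[:budget]

      # Hypothetical usage on a scale-free test network.
      g = nx.barabasi_albert_graph(1000, 3)
      targets = two_round_targets(g, budget=50)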

  4. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  5. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.

  6. Yeh-Stratton Criterion for Stress Concentrations on Fiber-Reinforced Composite Materials

    NASA Technical Reports Server (NTRS)

    Yeh, Hsien-Yang; Richards, W. Lance

    1996-01-01

    This study investigated the Yeh-Stratton Failure Criterion for stress concentrations in fiber-reinforced composite materials under tensile stresses. The Yeh-Stratton Failure Criterion was developed from the initial yielding of materials based on macromechanics. To investigate this criterion, the influence of the material's anisotropic properties and of far-field loading on composite materials with a central hole or a normal crack was studied. Special emphasis was placed on defining the crack tip stress fields and their applications. The Yeh-Stratton criterion's predictions for damage zone stress fields in fiber-reinforced composites under tensile loading were compared with several fracture criteria: the Tsai-Wu theory, Hoffman theory, Fischer theory, and Cowin theory. Theoretical predictions from these criteria are examined using experimental results.

  7. The use of Landsat digital data to detect and monitor vegetation water deficiencies

    NASA Technical Reports Server (NTRS)

    Thompson, D. R.; Wehmanen, O. A.

    1977-01-01

    In the Large Area Crop Inventory Experiment a technique was devised using a vector transformation of Landsat digital data to indicate when vegetation is undergoing moisture stress. A relation was established between the remote-sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index).

  8. Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language

    ERIC Educational Resources Information Center

    Romig, John Elwood; Therrien, William J.; Lloyd, John W.

    2017-01-01

    We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…

  9. The free growth criterion for grain initiation in TiB 2 inoculated γ-titanium aluminide based alloys

    NASA Astrophysics Data System (ADS)

    Gosslar, D.; Günther, R.

    2014-02-01

    γ-titanium aluminide (γ-TiAl) based alloys enable the design of light-weight and high-temperature resistant engine components. This work centers on a numerical study of the condition for grain initiation during solidification of TiB2 inoculated γ-TiAl based alloys. Grain initiation is treated according to the so-called free growth criterion. This means that the free growth barrier for grain initiation is determined by the maximum interfacial mean curvature between a nucleus and the melt. The strategy presented in this paper relies on iteratively increasing the volume of a nucleus that partially wets a hexagonal TiB2 crystal, minimizing the interfacial energy, and calculating the corresponding interfacial curvature. The maximum curvature obtained in this way yields a scaling relation between the size of TiB2 crystals and the free growth barrier. Comparison with a prototypical TiB2 crystal in an as-cast γ-TiAl based alloy then allowed the free growth barrier prevailing under experimental conditions to be predicted. The validity of the free growth criterion is discussed using an interfacial energy criterion.
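
    For spherical-cap nucleation on a flat circular face, the free-growth barrier has the classic closed form below (the Greer-type result). The paper generalizes this to hexagonal TiB2 crystals numerically, so the formula is shown only as an orientation point, not as the paper's computed criterion.

      def free_growth_undercooling(sigma, delta_s_v, d):
          # Greer-type free-growth barrier for a circular nucleation face:
          # dT_fg = 4 * sigma / (delta_s_v * d), with sigma the interfacial
          # energy (J/m^2), delta_s_v the entropy of fusion per unit volume
          # (J/(K m^3)), and d the particle diameter (m).
          return 4.0 * sigma / (delta_s_v * d)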

  10. Some fuzzy techniques for staff selection process: A survey

    NASA Astrophysics Data System (ADS)

    Md Saad, R.; Ahmad, M. Z.; Abu, M. S.; Jusoh, M. S.

    2013-04-01

    With a high level of business competition, it is vital to have flexible staff who are able to adapt themselves to work circumstances. However, the staff selection process is not an easy task, even when it is tackled in a simplified version containing only a single criterion and a homogeneous skill. When multiple criteria and various skills are involved, the problem becomes much more complicated. In addition, there is some information that cannot be measured precisely. This is patently obvious when dealing with opinions, thoughts, feelings, beliefs, etc. One possible tool to handle this issue is fuzzy set theory. Therefore, the objective of this paper is to review the existing fuzzy techniques for solving the staff selection process. It classifies several existing research methods and identifies areas where there is a gap and further research is needed. Finally, this paper concludes by suggesting new ideas for future research based on the gaps identified.

  11. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α , and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
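
    Of the α selection methods compared, generalized cross validation has a particularly compact form; a dense-matrix sketch for Tikhonov regularization is given below. The matrix names (kernel K, operator L, signal S) follow the abstract's setting, but everything else is generic numerics under standard assumptions, not the authors' code.

      import numpy as np

      def gcv_alpha(K, L, S, alphas):
          # Pick the Tikhonov parameter by generalized cross validation:
          # minimize GCV(a) = n * ||K P_a - S||^2 / (n - trace(H(a)))^2,
          # where H(a) = K (K'K + a^2 L'L)^{-1} K' is the influence matrix.
          n = S.size
          best_alpha, best_gcv = None, np.inf
          for a in alphas:
              A = np.linalg.solve(K.T @ K + a**2 * (L.T @ L), K.T)
              H = K @ A
              resid = K @ (A @ S) - S
              gcv = n * float(resid @ resid) / (n - np.trace(H)) ** 2
              if gcv < best_gcv:
                  best_alpha, best_gcv = a, gcv
          return best_alpha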

  12. Decision-making patterns for dietary supplement purchases among women aged 25 to 45 years.

    PubMed

    Miller, Carla K; Russell, Teri; Kissling, Grace

    2003-11-01

    Women frequently consume dietary supplements but the criteria used to select supplements have received little investigation. This research identified the decision-making criteria used for dietary supplements among women aged 25 to 45 years who consumed a supplement at least four times per week. Participants (N=51) completed an in-store shopping interview that was audiotaped, transcribed, and analyzed qualitatively for the criteria used to make supplement selections. Qualitative analysis revealed 10 key criteria and the number of times each person used each criterion was quantified. Cluster analysis identified five homogeneous subgroups of participants based on the criteria used. These included brand shopper, bargain shopper, quality shopper, convenience shopper, and information gatherer. Supplement users vary in the criteria used to make point-of-purchase supplement selections. Dietetics professionals can classify supplement users according to the criteria used to tailor their nutrition counseling and better meet the educational needs of consumers.

  13. Physical mechanism and numerical simulation of the inception of the lightning upward leader

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Qingmin; Lu Xinchang; Shi Wei

    2012-12-15

    The upward leader is a key physical process in the leader progression model of lightning shielding. The inception mechanism and criterion of the upward leader need further understanding and clarification. Based on leader discharge theory, this paper proposes the critical electric field intensity of the stable upward leader (CEFISUL) and characterizes it by the valve electric field intensity on the conductor surface, E_L, which is the basis of a new inception criterion for the upward leader. Through numerical simulation under various physical conditions, we verified that E_L is mainly related to the conductor radius, and data fitting yields the mathematical expression of E_L. We further establish a computational model for the lightning shielding performance of transmission lines based on the proposed CEFISUL criterion, which reproduces the shielding failure rate of typical UHV transmission lines. The model-based calculation results agree well with statistical data from on-site operations, which shows the effectiveness and validity of the CEFISUL criterion.

  14. Multi-object detection and tracking technology based on hexagonal opto-electronic detector

    NASA Astrophysics Data System (ADS)

    Song, Yong; Hao, Qun; Li, Xiang

    2008-02-01

    A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector, composed of 6 linear CCDs, has been developed for the first time to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulated experiments have been carried out to verify the validity of the proposed technology. They show that the detection and tracking of multiple objects can be achieved at high speed by using the proposed hexagonal detector and the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.

  15. The Research Diagnostic Criteria for Temporomandibular Disorders. I: overview and methodology for assessment of validity.

    PubMed

    Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O

    2010-01-01

    The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas > or = 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k > or = 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.

  16. Selecting AGN through Variability in SN Datasets

    NASA Astrophysics Data System (ADS)

    Boutsia, K.; Leibundgut, B.; Trevese, D.; Vagnetti, F.

    2010-07-01

    Variability is a main property of Active Galactic Nuclei (AGN), and it was adopted as a selection criterion using multi-epoch surveys conducted for the detection of supernovae (SNe). We have used two SN datasets. First we selected the AXAF field of the STRESS project, centered on the Chandra Deep Field South where, besides the deep X-ray surveys, various optical catalogs also exist. Our method yielded 132 variable AGN candidates. We then extended our method to the dataset of the ESSENCE project, which was active for 6 years and produced high-quality light curves in the R and I bands. We obtained a sample of ~4800 variable sources, down to R = 22, in the whole 12 deg² ESSENCE field. Among them, a subsample of ~500 high-priority AGN candidates was created using the shape of the structure function as a secondary criterion. In a pilot spectroscopic run we confirmed the AGN nature of nearly all of our candidates.
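
    The secondary criterion mentioned above, the structure function, quantifies how variability amplitude grows with time lag. A minimal illustrative sketch of a first-order structure function for an unevenly sampled light curve follows; the variable names, binning choices, and toy data are ours, not the authors':

```python
import numpy as np

def structure_function(times, mags, bins):
    """First-order structure function: mean squared magnitude
    difference as a function of time lag, averaged in lag bins."""
    dt = np.abs(times[:, None] - times[None, :])   # all pairwise lags
    dm2 = (mags[:, None] - mags[None, :]) ** 2     # squared mag differences
    iu = np.triu_indices(len(times), k=1)          # unique pairs only
    dt, dm2 = dt[iu], dm2[iu]
    sf = np.full(len(bins) - 1, np.nan)
    for i in range(len(bins) - 1):
        sel = (dt >= bins[i]) & (dt < bins[i + 1])
        if sel.any():
            sf[i] = dm2[sel].mean()
    return sf

# Toy light curve: AGN-like variability whose amplitude grows with lag.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2000, 80))              # epochs in days
m = 20.0 + 0.2 * np.cumsum(rng.normal(size=80))    # random-walk magnitudes
print(structure_function(t, m, bins=np.array([1, 10, 100, 1000, 2000])))
```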

  17. Neutrality and evolvability of designed protein sequences

    NASA Astrophysics Data System (ADS)

    Bhattacherjee, Arnab; Biswas, Parbati

    2010-07-01

    The effect of foldability on a protein's evolvability is analyzed by a two-pronged approach consisting of a self-consistent mean-field theory and Monte Carlo simulations. Theory and simulation models representing protein sequences with binary patterning of amino acid residues compatible with a particular foldability criterion are used. This generalized foldability criterion is derived using the high-temperature cumulant expansion approximating the free energy of folding. The effect of cumulative point mutations on these designed proteins is studied under neutral conditions. Robustness, a protein's ability to tolerate random point mutations, is determined under a selective pressure of stability (ΔΔG) for the theory-designed sequences, which are found to be more robust than the Monte Carlo and mean-field-biased Monte Carlo generated sequences. The results show that this foldability criterion selects viable protein sequences more effectively than the Monte Carlo method, which has a marked effect on how the selective pressure shapes the evolutionary sequence space. These observations may impact de novo sequence design and its applications in protein engineering.

  18. Statistically Based Approach to Broadband Liner Design and Assessment

    NASA Technical Reports Server (NTRS)

    Jones, Michael G. (Inventor); Nark, Douglas M. (Inventor)

    2016-01-01

    A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.

  19. Algorithm of choosing type of mechanical assembly production of instrument making enterprises of Industry 4.0

    NASA Astrophysics Data System (ADS)

    Zakoldaev, D. A.; Shukalov, A. V.; Zharinov, I. O.; Zharinov, O. O.

    2018-05-01

    The problem of choosing the type of mechanical assembly production for instrument making enterprises of Industry 4.0 is studied, and the corresponding project algorithms for Industry 3.0 and Industry 4.0 are compared. The Industry 4.0 algorithm is based on analysis of the technological route of the manufacturing process in a company equipped with cyber-physical systems. It may yield project solutions selected from either the primary or the auxiliary part of the production, and its decision rules are based on an optimality criterion.

  20. 36 CFR 18.7 - How are lease proposals solicited and selected if the Director issues a Request for Bids?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... solicited and selected if the Director issues a Request for Bids? 18.7 Section 18.7 Parks, Forests, and... § 18.7 How are lease proposals solicited and selected if the Director issues a Request for Bids? (a) If the amount of the rent is the only criterion for award of a lease, the Director may solicit bids...

  1. MID-INFRARED SELECTION OF ACTIVE GALACTIC NUCLEI WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER. I. CHARACTERIZING WISE-SELECTED ACTIVE GALACTIC NUCLEI IN COSMOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stern, Daniel; Assef, Roberto J.; Eisenhardt, Peter

    2012-07-01

    The Wide-field Infrared Survey Explorer (WISE) is an extremely capable and efficient black hole finder. We present a simple mid-infrared color criterion, W1 - W2 ≥ 0.8 (i.e., [3.4] - [4.6] ≥ 0.8, Vega), which identifies 61.9 ± 5.4 active galactic nucleus (AGN) candidates per deg² to a depth of W2 ≈ 15.0. This implies a much larger census of luminous AGNs than found by typical wide-area surveys, attributable to the fact that mid-infrared selection identifies both unobscured (type 1) and obscured (type 2) AGNs. Optical and soft X-ray surveys alone are highly biased toward only unobscured AGNs, while this simple WISE selection likely identifies even heavily obscured, Compton-thick AGNs. Using deep, public data in the COSMOS field, we explore the properties of WISE-selected AGN candidates. At the mid-infrared depth considered, 160 μJy at 4.6 μm, this simple criterion identifies 78% of Spitzer mid-infrared AGN candidates according to the criteria of Stern et al., and the reliability is 95%. We explore the demographics, multiwavelength properties and redshift distribution of WISE-selected AGN candidates in the COSMOS field.
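
    The selection itself reduces to a one-line color cut. A minimal sketch applying the W1 - W2 ≥ 0.8 (Vega) criterion with a depth limit to a toy catalog; the synthetic magnitude arrays are illustrative stand-ins, not actual WISE data handling:

```python
import numpy as np

# Toy catalog of Vega magnitudes; real work would read a WISE table.
rng = np.random.default_rng(1)
w1 = rng.uniform(12.0, 16.0, 10000)
w2 = w1 - rng.normal(0.3, 0.4, 10000)          # most sources are blue

# Stern et al.-style mid-IR AGN criterion with a W2 depth limit.
is_candidate = ((w1 - w2) >= 0.8) & (w2 <= 15.0)
print(f"AGN candidates: {is_candidate.sum()} of {len(w1)}")
```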

  2. Criterion for estimation of stress-deformed state of SD-materials

    NASA Astrophysics Data System (ADS)

    Orekhov, Andrey V.

    2018-05-01

    A criterion is proposed that determines the moment when the growth pattern of the monotonic numerical sequence varies from the linear to the parabolic one. The criterion is based on the comparison of squares of errors for the linear and the incomplete quadratic approximation. The approximating functions are constructed locally, only at those points that are located near a possible change in nature of the increase in the sequence.
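
    A minimal sketch of such a change detector, assuming the "incomplete quadratic" is a parabola with the linear term dropped (f(x) = ax² + b) and that both models are fitted by least squares over a sliding window of recent points; the window length, threshold rule, and toy sequence are our own illustrative choices:

```python
import numpy as np

def growth_is_parabolic(y, window=8):
    """Compare squared errors of a local linear fit and a local
    incomplete-quadratic fit (a*x^2 + b) over the last `window` points."""
    x = np.arange(window, dtype=float)
    tail = np.asarray(y[-window:], dtype=float)
    A_lin = np.column_stack([x, np.ones(window)])      # model a*x + b
    A_par = np.column_stack([x**2, np.ones(window)])   # model a*x^2 + b
    sse_lin = np.linalg.lstsq(A_lin, tail, rcond=None)[1][0]
    sse_par = np.linalg.lstsq(A_par, tail, rcond=None)[1][0]
    return sse_par < sse_lin   # parabola now explains the tail better

seq = [1, 2, 3, 4, 5, 6.2, 7.9, 10.1, 12.8, 16.2, 20.1]  # bends upward
print(growth_is_parabolic(seq))  # True once curvature dominates
```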

  3. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, C_E, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, C_Ek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown C_Ek from the residuals during model calibration. The inferred C_Ek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using C_Ek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using C_Ek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
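
    The averaging weights behind this discussion follow the standard information-criterion form, w_k ∝ exp(-Δ_k/2), with Δ_k the criterion value of model k minus the minimum. A minimal sketch with made-up criterion values; it also shows how a gap of about ten units already pushes one model toward 100%, which is the dominance problem the iterative two-stage method is meant to fix:

```python
import numpy as np

def model_averaging_weights(ic_values):
    """Weights w_k = exp(-0.5 * delta_k) / sum, delta_k = IC_k - min(IC)."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values for three alternative conceptual models:
print(model_averaging_weights([130.2, 140.0, 152.5]))
# A ~10-unit gap already gives the best model ~99% of the weight.
```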

  4. Eyewitness identification in simultaneous and sequential lineups: an investigation of position effects using receiver operating characteristics.

    PubMed

    Meisters, Julia; Diedenhofen, Birk; Musch, Jochen

    2018-04-20

    For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent's response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect's position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.

  5. Eyewitness decisions in simultaneous and sequential lineups: a dual-process signal detection theory analysis.

    PubMed

    Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H

    2005-07-01

    Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.

  6. Construction of Lagrangians and Hamiltonians from the Equation of Motion

    ERIC Educational Resources Information Center

    Yan, C. C.

    1978-01-01

    Demonstrates that infinitely many Lagrangians and Hamiltonians can be constructed from a given equation of motion. Points out the lack of an established criterion for making a proper selection. (Author/GA)

  7. Comparison of Selection Procedures and Validation of Criterion Used in Selection of Significant Control Variates of a Simulation Model

    DTIC Science & Technology

    1990-03-01

    and M.H. Kutner. Applied Linear Regression Models. Homewood, IL: Richard D. Irwin Inc., 1983. Pritsker, A. Alan B. Introduction to Simulation and SLAM. "Control Variates in Simulation," European Journal of Operational Research, 42 (1989). Neter, J., W. Wasserman, and M.H. Kutner. Applied Linear Regression Models.

  8. A Comparison of Three Approaches to Correct for Direct and Indirect Range Restrictions: A Simulation Study

    ERIC Educational Resources Information Center

    Pfaffel, Andreas; Schober, Barbara; Spiel, Christiane

    2016-01-01

    A common methodological problem in the evaluation of the predictive validity of selection methods, e.g. in educational and employment selection, is that the correlation between predictor and criterion is biased. Thorndike's (1949) formulas are commonly used to correct for this biased correlation. An alternative approach is to view the selection…
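
    For reference, Thorndike's Case II correction for direct range restriction on the predictor has a simple closed form. A minimal sketch follows; the numeric values are invented for illustration:

```python
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Correct a predictor-criterion correlation for direct range
    restriction on the predictor (Thorndike, 1949, Case II)."""
    k = sd_unrestricted / sd_restricted        # range-restriction ratio
    r = r_restricted
    return (r * k) / math.sqrt(1.0 - r * r + r * r * k * k)

# Observed r = .30 among selected applicants whose predictor SD was
# halved by selection implies r ≈ .53 in the full applicant pool.
print(round(thorndike_case2(0.30, sd_unrestricted=2.0, sd_restricted=1.0), 2))
```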

  9. Diameter Growth of Selected Bottomland Hardwoods as Affected by Species and Site

    Treesearch

    Charles B. Briscoe

    1955-01-01

    As management is intensified in bottomland forests, efforts will be made to control species composition. One criterion for the selection of species to favor is growth rate, about which relatively little is known for bottomland species. This study was made to compare the relative growth rates of certain bottomland hardwood species in southern Louisiana.

  10. Optimizing the Use of Response Times for Item Selection in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua

    2018-01-01

    Despite common operationalization, measurement efficiency of computerized adaptive testing should not only be assessed in terms of the number of items administered but also the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…

  11. Knowledge based system and decision making methodologies in materials selection for aircraft cabin metallic structures

    NASA Astrophysics Data System (ADS)

    Adhikari, Pashupati Raj

    Materials selection processes have been among the most important aspects of product design and development. A knowledge-based system (KBS) and some of the methodologies used in materials selection for the design of aircraft cabin metallic structures are discussed. Overall aircraft weight reduction means substantially less fuel consumption, and part of the solution is to reduce the overall weight of metallic structures inside the cabin. Among various materials selection methodologies using Multi Criterion Decision Making (MCDM) techniques, a few are demonstrated with examples, and the results are compared with those obtained using Ashby's approach to materials selection. Pre-defined constraint values, mainly mechanical properties, are employed as relevant attributes in the process. Aluminum alloys with high strength-to-weight ratios have been second to none in most aircraft parts manufacturing. Magnesium alloys, which are much lighter, are tested using the methodologies as alternatives to the Al-alloys currently used in the structures, and the ranked results are compared. Each material attribute considered in the design is categorized as a benefit or non-benefit attribute. Using Ashby's approach, material indices that must be maximized for optimum performance are determined, and materials are ranked based on the average of the consolidated index rankings.
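
    The Ashby side of such a comparison reduces to computing a material index and ranking by it. A minimal sketch using the strength-to-weight index M = σ/ρ; the property values are rough generic figures for illustration, not the thesis data:

```python
# Minimal Ashby-style ranking by strength-to-weight material index.
# Property values are rough illustrative figures (MPa, g/cm^3).
materials = {
    "Al 7075-T6": {"strength": 503, "density": 2.81},
    "Al 2024-T3": {"strength": 345, "density": 2.78},
    "Mg AZ31B":   {"strength": 200, "density": 1.77},
    "Mg WE43":    {"strength": 220, "density": 1.84},
}

# Index M = strength / density: to be maximized for minimum-weight design.
ranked = sorted(materials.items(),
                key=lambda kv: kv[1]["strength"] / kv[1]["density"],
                reverse=True)
for name, p in ranked:
    print(f"{name:12s}  M = {p['strength'] / p['density']:6.1f}")
```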

  12. Comparing the Construct and Criterion-Related Validity of Ability-Based and Mixed-Model Measures of Emotional Intelligence

    ERIC Educational Resources Information Center

    Livingstone, Holly A.; Day, Arla L.

    2005-01-01

    Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…

  13. Development of a Criterion-Referenced, Performance-Based Assessment of Reading Comprehension in a Whole Literacy Program.

    ERIC Educational Resources Information Center

    Tibbetts, Katherine A.; And Others

    This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…

  14. Neuropathological diagnostic criteria for Alzheimer's disease.

    PubMed

    Murayama, Shigeo; Saito, Yuko

    2004-09-01

    Neuropathological diagnostic criteria for Alzheimer's disease (AD) are based on tau-related pathology: neurofibrillary tangles (NFT) or neuritic plaques (NP). The Consortium to Establish a Registry for Alzheimer's Disease (CERAD) criterion evaluates the highest density of neocortical NP from 0 (none) to C (abundant). Clinical documentation of dementia and NP stage A in younger cases, B in young-old cases and C in older cases fulfils the criterion of AD. The CERAD criterion is most frequently used in clinical outcome studies because of its inclusion of clinical information. Braak and Braak's criterion evaluates the density and distribution of NFT and classifies them into: I/II, entorhinal; III/IV, limbic; and V/VI, neocortical stage. These three stages correspond to normal cognition, cognitive impairment and dementia, respectively. As Braak's criterion is based on morphological evaluation of the brain alone, this criterion is usually adopted in the research setting. The National Institute on Aging and Ronald and Nancy Reagan Institute of the Alzheimer's Association criterion combines these two criteria and categorizes cases into NFT V/VI and NP C, NFT III/IV and NP B, and NFT I/II and NP A, corresponding to high, middle and low probability of AD, respectively. As most AD cases in the aged population are categorized into Braak tangle stage IV and CERAD stage C, the usefulness of this criterion has not yet been determined. The combination of Braak's NFT stage equal to or above IV and Braak's senile plaque stage C provides, arguably, the highest sensitivity and specificity. In future, the criteria should include in vivo dynamic neuropathological data, including 3D MRI, PET scans and CSF biomarkers, as well as more sensitive and specific immunohistochemical and immunochemical grading of AD.
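
    The combined NIA-Reagan rule described above is effectively a lookup on the two stagings. A minimal schematic sketch of that mapping, with category labels taken from the abstract; this is an illustration of the rule's structure, not a clinical tool:

```python
def nia_reagan_likelihood(braak_stage, cerad_score):
    """Map Braak NFT stage (1-6) and CERAD neuritic plaque score
    ('A', 'B', 'C') to the NIA-Reagan likelihood of AD, following
    the three combinations given in the text."""
    if braak_stage in (5, 6) and cerad_score == "C":
        return "high"
    if braak_stage in (3, 4) and cerad_score == "B":
        return "middle"
    if braak_stage in (1, 2) and cerad_score == "A":
        return "low"
    return "unclassified by the simple combination"

print(nia_reagan_likelihood(4, "B"))   # -> middle
print(nia_reagan_likelihood(6, "C"))   # -> high
```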

  15. Criterion-based laparoscopic training reduces total training time.

    PubMed

    Brinkman, Willem M; Buzink, Sonja N; Alevizos, Leonidas; de Hingh, Ignace H J T; Jakimowicz, Jack J

    2012-04-01

    The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training based on training outcome and time efficiency. During four training sessions within 1 week (one session per day) 34 medical interns (no laparoscopic experience) practiced two basic tasks on the Simbionix LAP Mentor virtual-reality (VR) simulator: 'clipping and grasping' and 'cutting'. Group C (criterion-based) (N = 17) trained to reach predefined criteria and stopped training in each session when these criteria were met, with a maximum training time of 1 h. Group T (time-based) (N = 17) trained for a fixed time of 1 h each session. Retention of skills was assessed 1 week after training. In addition, transferability of skills was established using the Haptica ProMIS augmented-reality simulator. Both groups improved their performance significantly over the course of the training sessions (Wilcoxon signed ranks, P < 0.05). Both groups showed skill transferability and skill retention. When comparing the performance parameters of group C and group T, their performances in the first, the last and the retention training sessions did not differ significantly (Mann-Whitney U test, P > 0.05). The average number of repetitions needed to meet the criteria also did not differ between the groups. Overall, group C spent less time training on the simulator than did group T (74:48 and 120:10 min, respectively; P < 0.001). Group C performed significantly fewer repetitions of each task, overall and in sessions 2, 3 and 4. Criterion-based training of basic laparoscopic skills can reduce the overall training time with no impact on training outcome, transferability or retention of skills. Criterion-based training should therefore be the method of choice in laparoscopic skills curricula.

  16. Path selection in the growth of rivers

    DOE PAGES

    Cohen, Yossi; Devauchelle, Olivier; Seybold, Hansjörg F.; ...

    2015-11-02

    River networks exhibit a complex ramified structure that has inspired decades of studies, yet an understanding of the propagation of a single stream remains elusive. In this paper, we invoke a criterion for path selection from fracture mechanics and apply it to the growth of streams in a diffusion field. We show that, as it cuts through the landscape, a stream maintains a symmetric groundwater flow around its tip. The local flow conditions therefore determine the growth of the drainage network. We use this principle to reconstruct the history of a network and to find a growth law associated with it. Finally, our results show that the deterministic growth of a single channel based on its local environment can be used to characterize the structure of river networks.

  17. Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision

    PubMed Central

    Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun

    2017-01-01

    A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is more effective for medium-to-long range distances. As a key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and a constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter in obtaining an accurate time interval. Monte Carlo simulations over four parameters, compared against the Cramér-Rao lower bound (CRLB) and the estimated timing jitter, are established to show how range precision is influenced by pulse amplitude, pulse width, attenuation fraction and delay time of the CFD. Experiments were carried out to verify the relationship between range precision and three of the parameters, excluding pulse width. It can be concluded that two parameters of the ranging circuit (attenuation fraction and delay time) should be selected according to the ranging performance at the minimum pulse amplitude, and that the attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. This selection criterion for time-pickoff circuit parameters is helpful for the ranging circuit design of TOF LiDAR systems. PMID:29039772
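
    A CFD subtracts an attenuated copy of the pulse from a delayed copy and times the zero crossing of the resulting bipolar signal, which is ideally independent of pulse amplitude (suppressing walk error). A minimal sketch under that textbook description; the fraction, delay, and Gaussian test pulse are illustrative choices, not the paper's circuit values:

```python
import numpy as np

def cfd_timing(pulse, frac=0.4, delay_samples=5):
    """Constant fraction discriminator: subtract an attenuated copy
    from a delayed copy and locate the zero crossing."""
    delayed = np.concatenate([np.zeros(delay_samples), pulse[:-delay_samples]])
    bipolar = delayed - frac * pulse
    neg = np.where(bipolar < 0)[0]                 # leading negative lobe
    start = neg[0]
    cross = start + np.argmax(bipolar[start:] >= 0)  # first upward crossing
    y0, y1 = bipolar[cross - 1], bipolar[cross]
    return (cross - 1) + y0 / (y0 - y1)            # sub-sample interpolation

t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 60) / 8.0) ** 2)       # Gaussian return pulse
print(cfd_timing(3.0 * pulse) - cfd_timing(pulse)) # ~0: amplitude-independent
```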

  18. Technical issues affecting the implementation of US environmental protection agency's proposed fish tissue-based aquatic criterion for selenium

    Treesearch

    A. Dennis Lemly; Joseph P. Skorupa

    2007-01-01

    The US Environmental Protection Agency is developing a national water quality criterion for selenium that is based on concentrations of the element in fish tissue. Although this approach offers advantages over the current water-based regulations, it also presents new challenges with respect to implementation. A comprehensive protocol that answers the ‘‘what, where, and...

  19. Construction of edge cracks pre-criterion model based on hot rolling experiment and simulation of AZ31 magnesium alloy

    NASA Astrophysics Data System (ADS)

    Ning, Fangkun; Jia, Weitao; Hou, Jian; Chen, Xingrui; Le, Qichi

    2018-05-01

    Various fracture criteria, especially the Johnson and Cook (J-C) model and the (normalized) Cockcroft and Latham (C-L) criterion, were contrasted and discussed. Based on the normalized C-L criterion adopted in this paper, FE simulation was carried out, and hot rolling experiments over a temperature range of 200 °C-350 °C, rolling reduction rates of 25%-40% and rolling speeds from 7-21 r/min were conducted. The microstructure was observed by optical microscope, and the damage values from the simulation results were compared with the lengths of cracks across the various parameters. The results show that the plate rolled at 350 °C, 40% reduction and 14 r/min generated fewer edge cracks, and its microstructure exhibited slight shear bands and fine dynamically recrystallized grains. The edge crack pre-criterion model was obtained by combining the normalized C-L criterion with the Zener-Hollomon equation and the deformation activation energy.

  20. Predicting operator workload during system design

    NASA Technical Reports Server (NTRS)

    Aldrich, Theodore B.; Szabo, Sandra M.

    1988-01-01

    A workload prediction methodology was developed in response to the need to measure workloads associated with operation of advanced aircraft. The application of the methodology will involve: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include reliability and validity of workload predictors and selection of appropriate criterion measures.

  1. Bayesian transformation cure frailty models with multivariate failure time data.

    PubMed

    Yin, Guosheng

    2008-12-10

    We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
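
    The deviance information criterion used here for model selection is a short computation once posterior draws are available: DIC = mean deviance + pD, with pD = mean deviance minus the deviance at the posterior mean. A minimal sketch for a plain normal likelihood, with synthetic draws standing in for Gibbs output (not the paper's cure frailty model):

```python
import numpy as np

def dic(deviance_draws, deviance_at_posterior_mean):
    """DIC = mean deviance + effective number of parameters pD,
    with pD = mean deviance - deviance at the posterior mean."""
    d_bar = np.mean(deviance_draws)
    p_d = d_bar - deviance_at_posterior_mean
    return d_bar + p_d, p_d

def normal_deviance(y, mu, sigma):
    """-2 log likelihood of i.i.d. normal data."""
    n = len(y)
    return n * np.log(2 * np.pi * sigma**2) + np.sum((y - mu)**2) / sigma**2

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, 50)                            # data
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(50), 2000)  # stand-in posterior
dev_draws = [normal_deviance(y, m, 1.0) for m in mu_draws]
dev_at_mean = normal_deviance(y, mu_draws.mean(), 1.0)
print(dic(dev_draws, dev_at_mean))                      # (DIC, pD); pD ~ 1
```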

  2. Thermally induced secondary atomization of droplet in an acoustic field

    NASA Astrophysics Data System (ADS)

    Basu, Saptarshi; Saha, Abhishek; Kumar, Ranganathan

    2012-01-01

    We study the thermal effects that lead to instability and breakup in acoustically levitated vaporizing fuel droplets. For certain liquids, atomization occurs at the droplet equator under external heating. A short-wavelength [Kelvin-Helmholtz (KH)] instability for diesel and biodiesel droplets triggers this secondary atomization. Vapor pressure, latent heat, and specific heat govern the vaporization rate and temperature history, which affect the surface tension gradient and gas phase density, ultimately dictating the onset of KH instability. We develop a criterion based on the Weber number to define a condition for the inception of secondary atomization.
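
    A Weber-number criterion compares disruptive inertial stress against restoring surface tension, We = ρ_g U² D / σ, and declares breakup above a critical value. A minimal sketch; the generic threshold of ~12 and the property values are illustrative figures, not the authors' criterion constants:

```python
def weber_number(gas_density, rel_velocity, diameter, surface_tension):
    """We = rho_g * U^2 * D / sigma: inertia vs. surface tension."""
    return gas_density * rel_velocity**2 * diameter / surface_tension

# Illustrative figures for a ~1 mm fuel droplet in an acoustic field.
WE_CRITICAL = 12.0    # generic breakup threshold; the paper's own
                      # criterion value may differ.
we = weber_number(gas_density=1.2, rel_velocity=12.0, diameter=1e-3,
                  surface_tension=0.025)
print(f"We = {we:.1f} -> atomization: {we > WE_CRITICAL}")
```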

  3. Size-sensitive sorting of microparticles through control of flow geometry

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Jalikop, Shreyas V.; Hilgenfeldt, Sascha

    2011-07-01

    We demonstrate a general concept of flow manipulation in microfluidic environments, based on controlling the shape and position of flow domains in order to force switching and sorting of microparticles without moving parts or changes in design geometry. Using microbubble acoustic streaming, we show that regulation of the relative strength of streaming and a superimposed Poiseuille flow allows for size-selective trapping and releasing of particles, with particle size sensitivity much greater than what is imposed by the length scales of microfabrication. A simple criterion allows for quantitative tuning of microfluidic devices for switching and sorting of particles of desired size.

  4. Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion

    NASA Astrophysics Data System (ADS)

    Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar

    2014-06-01

    In this manuscript, we put forward a standard based on a fuzzy decision making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness for image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then examined further, and a fuzzy soft set decision making criterion is used to decide the suitability of an S-box for image encryption applications.

  5. Formalization of the engineering science discipline - knowledge engineering

    NASA Astrophysics Data System (ADS)

    Peng, Xiao

    Knowledge is the most precious ingredient facilitating aerospace engineering research and product development activities. Currently, the most common knowledge retention methods are paper-based documents, such as reports, books and journals. However, those media have innate weaknesses. For example, four generations of flying wing aircraft (Horten, Northrop XB-35/YB-49, Boeing BWB and many others) were mostly developed in isolation; the subsequent engineers were not aware of the previous developments, because these projects were documented in ways that prevented the next generation of engineers from benefiting from the lessons learned. In this manner, inefficient knowledge retention methods have become a primary obstacle to knowledge transfer from experienced engineers to the next generation. In addition, the quality of knowledge itself is a vital criterion; thus, an accurate measure of the quality of 'knowledge' is required. Although qualitative knowledge evaluation criteria have been researched in other disciplines, such as the AAA criterion by Ernest Sosa stemming from the field of philosophy, a quantitative knowledge evaluation criterion capable of numerically determining the quality of knowledge for aerospace engineering research and product development activities still needs to be developed. To provide engineers with a high-quality knowledge management tool, the engineering science discipline Knowledge Engineering has been formalized to systematically address knowledge retention issues. This research undertaking formalizes Knowledge Engineering as follows: 1. Categorize knowledge according to its formats and representations for the first time, which serves as the foundation for the subsequent knowledge management function development. 2. Develop an efficiency evaluation criterion for knowledge management by analyzing the characteristics of both knowledge and the parties involved in the knowledge management processes. 3. Propose and develop an innovative Knowledge-Based System (KBS), the AVD KBS, forming a systematic approach facilitating knowledge management. 4. Demonstrate the efficiency advantages of the AVD KBS over traditional knowledge management methods via selected design case studies. This research formalizes, for the first time, Knowledge Engineering as a distinct discipline by delivering a robust and high-quality knowledge management and process tool, the AVD KBS. Formalizing knowledge proves to significantly impact the effectiveness of aerospace knowledge retention and utilization.

  6. Selecting the minimum prediction base of historical data to perform 5-year predictions of the cancer burden: The GoF-optimal method.

    PubMed

    Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon

    2015-06-01

    Predicting the future burden of cancer is a key issue for health services planning, in which selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data needed to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed prediction bases (the last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through the GoF-optimal method reached the highest COV and the lowest DR, and (ii) the best alternative to GoF-optimal was the strategy using the last 5 years as the prediction base. The GoF-optimal approach can be used as a selection criterion for finding an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
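
    A minimal sketch of the base-selection idea, assuming a log-linear trend fitted by least squares and an in-sample goodness-of-fit score (here, mean squared error on log counts) as the selection rule; the real method uses Poisson regression and formal GoF statistics, and the toy data with a trend change are invented:

```python
import numpy as np

def gof_optimal_base(years, counts, min_base=5):
    """Choose how many recent years to fit by minimizing an
    in-sample goodness-of-fit score of a log-linear trend."""
    log_c = np.log(np.asarray(counts, dtype=float))
    best = None
    for base in range(min_base, len(years) + 1):
        x, y = np.asarray(years[-base:], float), log_c[-base:]
        slope, intercept = np.polyfit(x, y, 1)
        mse = np.mean((y - (slope * x + intercept)) ** 2)
        if best is None or mse < best[0]:
            best = (mse, base, slope, intercept)
    return best

years = list(range(1990, 2008))
counts = [100 * 1.02 ** i for i in range(10)] + \
         [122 * 1.05 ** i for i in range(8)]       # trend change in 2000
mse, base, slope, intercept = gof_optimal_base(years, counts)
pred_2012 = np.exp(slope * 2012 + intercept)       # 5-year-ahead prediction
print(f"base = last {base} years, predicted 2012 count ~ {pred_2012:.0f}")
```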

  7. An expert system for planning and scheduling in a telerobotic environment

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.; Park, Eui H.

    1991-01-01

    A knowledge-based approach to assigning tasks to multiple agents working cooperatively on jobs that require a telerobot in the loop was developed. The generality of the approach allows the concept to be applied in non-teleoperational domains as well. The planning architecture, known as the task oriented planner (TOP), uses the principle of flow mechanism and the concept of planning by deliberation to preserve and use knowledge about a particular task. The TOP is an open-ended architecture developed with a NEXPERT expert system shell, and its knowledge organization allows for indirect consultation at various levels of task abstraction. Considering that a telerobot operates in a hostile and unstructured environment, task scheduling should respond to environmental changes. A general heuristic was developed for scheduling jobs with the TOP system. The technique is not to optimize a given scheduling criterion as in classical job and/or flow shop problems; for a teleoperation job schedule, criteria are situation dependent, and criterion selection is fuzzily embedded in the task-skill matrix computation. Goal achievement with minimum expected risk to the human operator is emphasized.

  8. Design forms of total knee replacement.

    PubMed

    Walker, P S; Sathasivam, S

    2000-01-01

    The starting point of this article is a general design criterion applicable to all types of total knee replacement. This criterion is then expanded upon to provide more specifics of the required kinematics, and the forces which the total knee must sustain. A characteristic which differentiates total knees is the amount of constraint which is required, and whether the constraint is translational or rotational. The different forms of total knee replacement are described in terms of these constraints, starting with the least constrained unicompartments to the almost fully constrained fixed and rotating hinges. Much attention is given to the range of designs in between these two extreme types, because they constitute by far the largest in usage. This category includes condylar replacements where the cruciate ligaments are preserved or resected, posterior cruciate substituting designs and mobile bearing knees. A new term, 'guided motion knees', is applied to the growing number of designs which control the kinematics by the use of intercondylar cams or specially shaped and even additional bearing surfaces. The final section deals with the selection of an appropriate design of total knee for specific indications based on the design characteristics.

  9. Technical Guidance for Conducting ASVAB Validation/Standards Studies in the U.S. Navy

    DTIC Science & Technology

    2015-02-01

    ... (the criterion), we can compute the variance of X in the unrestricted group, S_x^2, and in the restricted (selected) group, s_x^2. In contrast, ... as well as the selected group, s_x^2. We also know the variance of Y in the selected group, s_y^2, and the correlation of X and Y in the selected group ... and AS. Five levels of selection ratio (1.0, .8, .6, .4, and .2) and eight sample sizes (50, 75, 100, 150, 225, 350, 500, and 800) were considered

  10. Evaluating the Influence of the Microsatellite Marker Set on the Genetic Structure Inferred in Pyrus communis L.

    PubMed Central

    Urrestarazu, Jorge; Royo, José B.; Santesteban, Luis G.; Miranda, Carlos

    2015-01-01

    Fingerprinting information can be used to elucidate in a robust manner the genetic structure of germplasm collections, allowing a more rational and fine-grained assessment of genetic resources. Bayesian model-based approaches are nowadays generally preferred for inferring genetic structure, but it remains largely unresolved how marker sets should be built in order to obtain a robust inference. The objective was to evaluate, in Pyrus germplasm collections, the influence of the SSR marker set size on the genetic structure inferred, also evaluating the influence of the criterion used to select those markers. Inferences were performed considering an increasing number of SSR markers, ranging from just two up to 25, incorporated one at a time into the analysis. The influence of the number of SSR markers used was evaluated by comparing the number of populations and the strength of the signal detected, as well as the similarity of the genotype assignments to populations between analyses. To test whether those results were influenced by the criterion used to select the SSRs, several choosing scenarios based on the discrimination power or the fixation index values of the SSRs were tested. Our results indicate that population structure could be inferred accurately once a certain threshold number of SSRs was reached, which depended on the underlying structure within the genotypes, but the method used to select the markers included in each set appeared not to be very relevant. The minimum number of SSRs required to provide robust structure inferences and adequate measurements of differentiation, even when low differentiation levels exist within populations, proved similar to that of the complete list of recommended markers for fingerprinting. When an SSR set size similar to the minimum marker sets recommended for fingerprinting is used, only major divisions or moderate (F_ST > 0.05) differentiation of the germplasm are detected. PMID:26382618

  11. Comparison of computer versus manual determination of pulmonary nodule volumes in CT scans

    NASA Astrophysics Data System (ADS)

    Biancardi, Alberto M.; Reeves, Anthony P.; Jirapatnakul, Artit C.; Apanasovitch, Tatiyana; Yankelevitz, David; Henschke, Claudia I.

    2008-03-01

    Accurate nodule volume estimation is necessary in order to estimate the clinically relevant growth rate or change in size over time. An automated nodule volume-measuring algorithm was applied to a set of pulmonary nodules that were documented by the Lung Image Database Consortium (LIDC). The LIDC process model specifies that each scan is assessed by four experienced thoracic radiologists and that boundaries are marked around the visible extent of nodules 3 mm and larger. Nodules were selected from the LIDC database with the following inclusion criteria: (a) they must have a solid component on a minimum of three CT image slices and (b) they must be marked by all four LIDC radiologists. A total of 113 nodules met the selection criteria, with diameters ranging from 3.59 mm to 32.68 mm (mean 9.37 mm, median 7.67 mm). The centroid of each marked nodule was used as the seed point for the automated algorithm. 95 nodules (84.1%) were correctly segmented, though one of these was judged by the automated method not to meet the first selection criterion; of the remainder, eight (7.1%) were structurally too complex or extensively attached, and 10 (8.8%) were considered improperly segmented after a simple visual inspection by a radiologist. Since the LIDC specifications, as noted above, instruct radiologists to include both solid and sub-solid parts, the automated method's core capability of segmenting solid tissue was augmented to also take the sub-solid parts of nodules into account. We ranked the distances of the automated estimates and the radiologist-based estimates from the median of the radiologist-based values. The automated method was in 76.6% of cases closer to the median than at least one of the values derived from the manual markings, which indicates very good agreement with the radiologists' markings.

  12. The Implementation of Analytical Hierarchy Process Method for Outstanding Achievement Scholarship Reception Selection at Universal University of Batam

    NASA Astrophysics Data System (ADS)

    Marfuah; Widiantoro, Suryo

    2017-12-01

    Universal University of Batam offers an outstanding achievement scholarship to current students each new academic year. Given the large number of new students interested in obtaining it, the selection team must be able to filter and choose the eligible ones. The selection process starts with evaluation and judgment by experts. There were five criteria as the basis of selection, each with three alternatives that must be considered. Based on university policy, the maximum number of recipients is five for each of six study programs: art of music, dance, industrial engineering, environmental engineering, telecommunication engineering, and software engineering. Because the expert choice is subjective, the AHP method was used to support consistent decision making through a pairwise comparison matrix process between criteria based on the selected alternatives, determining the priority order of the criteria and alternatives used. The results of these calculations were used as decision support to determine the eligible scholarship recipients, with final AHP priorities for the criteria of A (0.37), C (0.23), E (0.21), D (0.14) and B (0.06) at a consistency ratio of 0.05, and alternative priorities of 1 (0.63), 2 (0.26) and 3 (0.11) at a consistency ratio of 0.03, where each CR ≤ 0.1 is considered consistent.
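
    The AHP machinery referred to above derives a priority vector from a pairwise comparison matrix and checks a consistency ratio against the 0.1 rule. A minimal sketch using the principal-eigenvector method; the 3x3 matrix is illustrative, not the Batam study's data:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random indices

def ahp_priorities(pairwise):
    """Priority vector and consistency ratio from a reciprocal
    pairwise comparison matrix (principal eigenvector method)."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                   # normalized priorities
    ci = (eigvals[k].real - n) / (n - 1)           # consistency index
    return w, ci / RI[n]                           # (weights, CR)

# Illustrative 3x3 comparison of criteria:
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
print(np.round(w, 3), f"CR = {cr:.3f}")   # consistent if CR <= 0.1
```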

  13. Design of surface modifications for nanoscale sensor applications.

    PubMed

    Reimhult, Erik; Höök, Fredrik

    2015-01-14

    Nanoscale biosensors provide the possibility to miniaturize optic, acoustic and electric sensors to the dimensions of biomolecules. This enables approaching single-molecule detection and new sensing modalities that probe molecular conformation. Nanoscale sensors are predominantly surface-based and label-free to exploit inherent advantages of physical phenomena allowing high sensitivity without distortive labeling. There are three main criteria to be optimized in the design of surface-based and label-free biosensors: (i) the biomolecules of interest must bind with high affinity and selectively to the sensitive area; (ii) the biomolecules must be efficiently transported from the bulk solution to the sensor; and (iii) the transducer concept must be sufficiently sensitive to detect low coverage of captured biomolecules within reasonable time scales. The majority of literature on nanoscale biosensors deals with the third criterion while implicitly assuming that solutions developed for macroscale biosensors to the first two, equally important, criteria are applicable also to nanoscale sensors. We focus on providing an introduction to and perspectives on the advanced concepts for surface functionalization of biosensors with nanosized sensor elements that have been developed over the past decades (criterion (iii)). We review in detail how patterning of molecular films designed to control interactions of biomolecules with nanoscale biosensor surfaces creates new possibilities as well as new challenges.

  14. Design of Surface Modifications for Nanoscale Sensor Applications

    PubMed Central

    Reimhult, Erik; Höök, Fredrik

    2015-01-01

    Nanoscale biosensors provide the possibility to miniaturize optic, acoustic and electric sensors to the dimensions of biomolecules. This enables approaching single-molecule detection and new sensing modalities that probe molecular conformation. Nanoscale sensors are predominantly surface-based and label-free to exploit inherent advantages of physical phenomena allowing high sensitivity without distortive labeling. There are three main criteria to be optimized in the design of surface-based and label-free biosensors: (i) the biomolecules of interest must bind with high affinity and selectively to the sensitive area; (ii) the biomolecules must be efficiently transported from the bulk solution to the sensor; and (iii) the transducer concept must be sufficiently sensitive to detect low coverage of captured biomolecules within reasonable time scales. The majority of literature on nanoscale biosensors deals with the third criterion while implicitly assuming that solutions developed for macroscale biosensors to the first two, equally important, criteria are applicable also to nanoscale sensors. We focus on providing an introduction to and perspectives on the advanced concepts for surface functionalization of biosensors with nanosized sensor elements that have been developed over the past decades (criterion (iii)). We review in detail how patterning of molecular films designed to control interactions of biomolecules with nanoscale biosensor surfaces creates new possibilities as well as new challenges. PMID:25594599

  15. Further development and construct validation of MMPI-2-RF indices of global psychopathy, fearless-dominance, and impulsive-antisociality in a sample of incarcerated women.

    PubMed

    Phillips, Tasha R; Sellbom, Martin; Ben-Porath, Yossef S; Patrick, Christopher J

    2014-02-01

    Replicating and extending research by Sellbom et al. (M. Sellbom, Y. S. Ben-Porath, C. J. Patrick, D. B. Wygant, D. M. Gartland, & K. P. Stafford, 2012, Development and Construct Validation of the MMPI-2-RF Measures of Global Psychopathy, Fearless-Dominance, and Impulsive-Antisociality, Personality Disorders: Theory, Research, and Treatment, 3, 17-38), the current study examined the criterion-related validity of three self-report indices of psychopathy that were derived from scores on the Minnesota Multiphasic Personality Inventory (MMPI)-2-Restructured Form (MMPI-2-RF; Y. S. Ben-Porath & A. Tellegen, 2008, Minnesota Multiphasic Personality Inventory-2-Restructured Form: Manual for Administration, Scoring, and Interpretation, Minneapolis, MN: University of Minnesota Press). We estimated psychopathy indices by regressing scores from the Psychopathic Personality Inventory (PPI; S. O. Lilienfeld & B. P. Andrews, 1996, Development and Preliminary Validation of a Self-Report Measure of Psychopathic Personality Traits in Noncriminal Populations, Journal of Personality Assessment, 66, 488-524) and its two distinct facets, Fearless-Dominance and Impulsive-Antisociality, onto conceptually selected MMPI-2-RF scales. Data for a newly collected sample of 230 incarcerated women were combined with existing data from Sellbom et al.'s (2012) male correctional and mixed-gender college samples to establish regression equations with optimal generalizability. Correlation and regression analyses were then used to examine associations between the MMPI-2-RF-based estimates of PPI psychopathy and criterion measures (i.e., other well-established measures of psychopathy and conceptually related personality traits), and to evaluate whether gender moderated these associations. The MMPI-2-RF-based psychopathy indices correlated as expected with criterion measures and showed only one significant moderating effect for gender, namely, in the association between psychopathy and narcissism. These results provide further support for the validity of the MMPI-2-RF-based estimates of PPI psychopathy, and encourage their use in research and clinical contexts.

  16. Using the costs of drug therapy to screen patients for a community pharmacy-based medication review program.

    PubMed

    Krähenbühl, Jean-Marc; Decollogny, Anne; Bugnon, Olivier

    2008-12-01

    To measure the positive predictive value (PPV) of the cost of drug therapy (threshold = 2000 Swiss francs [CHF], US$1440, €1360) as a screening criterion for identifying patients who may benefit from medication review (MR); to describe identified drug-related problems (DRPs) and expense problems (EPs); and to estimate potential savings if all recommendations were accepted. Five voluntary Swiss community pharmacies. Of 12,680 patients, 592 (4.7%) had drug therapy costs exceeding 2000 CHF over a six-month period from July 1 to December 31, 2002. This threshold limit was set to identify patients at high risk of DRPs and EPs. Three pharmacists consecutively conducted a medication review based on the pharmaceutical charts of 125 sampled patients who met the inclusion criterion. The main outcome was the PPV of a threshold of 2000 CHF for identifying patients who might benefit from an MR: true positives were patients with at least one DRP, while false positives were patients with no DRP. The selection based on this criterion had a PPV of 86% for detecting patients with at least one DRP, and 95% if EPs were also considered. There was a mean of 2.64 (SD = 2.20) DRPs per patient and a mean of 2.14 (SD = 1.39) EPs per patient. Of these patients, 90% were over 65 years old or were treated with at least five chronic medications, two common criteria for identifying patients at risk of DRPs. The main types of DRPs were drug-drug interactions, compliance problems and duplicate drugs. The mean daily drug cost per patient was CHF 14.87 (US$10.70, €10.10). A potential savings of CHF 1.67 (US$1.20, €1.14) per day (11%) was estimated if all recommendations to solve the DRPs and EPs suggested herein were implemented. Further studies should investigate whether the potential benefit of medication reviews in preventing DRPs and containing costs in this patient group can be confirmed in a real practice environment.
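
    The PPV of such a threshold screen is simply the share of flagged patients who truly have at least one DRP. A minimal sketch; the toy patient records are invented, not the study's data:

```python
# PPV of a cost-threshold screen: among patients flagged by
# six-month drug cost > threshold, the share with >= 1 DRP.
patients = [
    {"cost_chf": 2450, "n_drp": 3},
    {"cost_chf": 2100, "n_drp": 0},
    {"cost_chf": 3300, "n_drp": 1},
    {"cost_chf": 1500, "n_drp": 2},   # missed by the screen
    {"cost_chf": 2800, "n_drp": 4},
]

THRESHOLD = 2000  # CHF over six months, as in the study
flagged = [p for p in patients if p["cost_chf"] > THRESHOLD]
true_pos = sum(1 for p in flagged if p["n_drp"] >= 1)
print(f"PPV = {true_pos / len(flagged):.0%}")   # 3 of 4 flagged -> 75%
```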

  17. Criterion-Related Validity of Two Curriculum-Based Measures of Mathematical Skill in Relation to Reading Comprehension in Secondary Students

    ERIC Educational Resources Information Center

    Anselmo, Giancarlo A.; Yarbrough, Jamie L.; Kovaleski, Joseph F.; Tran, Vi N.

    2017-01-01

    This study analyzed the relationship between benchmark scores from two curriculum-based measurement probes in mathematics (M-CBM) and student performance on a state-mandated high-stakes test. Participants were 298 students enrolled in grades 7 and 8 in a rural southeastern school. Specifically, we calculated the criterion-related and predictive…

  18. Corrections to the Eckhaus' stability criterion for one-dimensional stationary structures

    NASA Astrophysics Data System (ADS)

    Malomed, B. A.; Staroselsky, I. E.; Konstantinov, A. B.

    1989-01-01

    Two amendments to the well-known Eckhaus stability criterion for small-amplitude non-linear structures, generated by weak instability of a spatially uniform state of a non-equilibrium one-dimensional system against small perturbations with finite wavelengths, are obtained. First, we evaluate small corrections to the main Eckhaus term which, in contrast to that term, do not have a universal form. Comparison of those non-universal corrections with experimental or numerical results makes it possible to select a more relevant form of an effective nonlinear evolution equation. In particular, the comparison with such results for convective rolls and Taylor vortices gives arguments in favor of the Swift-Hohenberg equation. Second, we derive an analog of the Eckhaus criterion for systems that are degenerate in the sense that the second- and third-degree terms are absent from an expansion of their non-linear parts in powers of the dynamical variables.

  19. A heuristic multi-criteria classification approach incorporating data quality information for choropleth mapping

    PubMed Central

    Sun, Min; Wong, David; Kronenfeld, Barry

    2016-01-01

    Despite conceptual and technological advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are statistically indistinguishable may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-offs between the relevant criteria, and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426

  20. Correlates of the MMPI-2-RF in a college setting.

    PubMed

    Forbey, Johnathan D; Lee, Tayla T C; Handel, Richard W

    2010-12-01

    The current study examined empirical correlates of scores on Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; A. Tellegen & Y. S. Ben-Porath, 2008; Y. S. Ben-Porath & A. Tellegen, 2008) scales in a college setting. The MMPI-2-RF and six criterion measures (assessing anger, assertiveness, sex roles, cognitive failures, social avoidance, and social fear) were administered to 846 college students (men: n = 264; women: n = 582) to examine the convergent and discriminant validity of scores on the MMPI-2-RF Specific Problems and Interest scales. Results demonstrated evidence of generally good convergent score validity for the selected MMPI-2-RF scales, reflected in large effect size correlations with criterion measure scores. Further, MMPI-2-RF scale scores demonstrated adequate discriminant validity, reflected in relatively low comparative median correlations between scores on MMPI-2-RF substantive scale sets and criterion measures. Limitations and future directions are discussed.

  1. New segmentation-based tone mapping algorithm for high dynamic range image

    NASA Astrophysics Data System (ADS)

    Duan, Weiwei; Guo, Huinan; Zhou, Zuofeng; Huang, Huimin; Cao, Jianzhong

    2017-07-01

    Traditional tone mapping algorithms for displaying high dynamic range (HDR) images have the drawback of losing the impression of brightness, contrast and color information. To overcome this, we propose a new tone mapping algorithm based on dividing the image into different exposure regions. First, the over-exposure region is determined using the Local Binary Pattern information of the HDR image. Then, based on the peak and average gray levels of the histogram, the under-exposure and normal-exposure regions of the HDR image are selected separately. Finally, the different exposure regions are mapped by differentiated tone mapping methods to produce the final result. Experimental results show that the proposed algorithm achieves better performance than other algorithms in both visual quality and an objective contrast criterion.
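
    A minimal sketch of the region-splitting step, labeling pixels as under-, normal- or over-exposed from histogram statistics alone. The split rules here are simplified stand-ins for illustration; the paper additionally uses Local Binary Patterns for the over-exposure region:

```python
import numpy as np

def exposure_regions(gray):
    """Label pixels of a gray image (float, arbitrary HDR range) as
    under- (0), normal- (1), or over-exposed (2) from histogram stats."""
    hist, edges = np.histogram(gray, bins=256)
    peak = edges[np.argmax(hist)]          # dominant gray level
    mean = gray.mean()
    lo = min(peak, mean) * 0.5             # simplified split points
    hi = max(peak, mean) * 1.5
    labels = np.ones_like(gray, dtype=np.uint8)
    labels[gray < lo] = 0
    labels[gray > hi] = 2
    return labels

hdr = np.random.default_rng(3).gamma(2.0, 2.0, (4, 6))  # toy HDR tile
print(exposure_regions(hdr))
```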

  2. Reproductive medicine: the ethical issues in the twenty-first century.

    PubMed

    Campbell, Alastair V

    2002-02-01

    Reproductive medicine has developed to such an extent that numerous moral questions arise about the boundaries of applications of new reproductive technology. It is possible to imagine a future in which 'designer babies' are created and in which cloning, sex selection and male pregnancy become the instruments of individual desire or social policy. In this article, the concept of 'natural' is explored but rejected as an insufficient moral criterion for deciding these complex questions. A case is made for the criterion of welfare of the child and for the concept of the child as gift rather than product.

  3. A Thomistic defense of whole-brain death.

    PubMed

    Eberl, Jason T

    2015-08-01

    Michel Accad critiques the currently accepted whole-brain criterion for determining the death of a human being from a Thomistic metaphysical perspective and, in so doing, raises objections to a particular argument defending the whole-brain criterion by Patrick Lee and Germain Grisez. In this paper, I will respond to Accad's critique of the whole-brain criterion and defend its continued validity as a criterion for determining when a human being's death has occurred in accord with Thomistic metaphysical principles. I will, however, join Accad in criticizing Lee and Grisez's proposed defense of the whole-brain criterion as potentially leading to erroneous conclusions regarding the determination of human death. Lay summary: Catholic physicians and bioethicists currently debate the legally accepted clinical standard for determining when a human being has died, known as the "whole-brain criterion," which has also been morally affirmed by the Magisterium. This paper responds to physician Michel Accad's critique of the whole-brain criterion based upon St. Thomas Aquinas's metaphysical account of human nature as a union of a rational soul and a material body. I defend the whole-brain criterion from the same Thomistic philosophical perspective, while agreeing with Accad's objection to an alternative Thomistic defense of whole-brain death by philosophers Patrick Lee and Germain Grisez.

  4. Thermo-solutal and kinetic modes of stable dendritic growth with different symmetries of crystalline anisotropy in the presence of convection

    NASA Astrophysics Data System (ADS)

    Alexandrov, Dmitri V.; Galenko, Peter K.; Toropova, Lyubov V.

    2018-01-01

    Motivated by important applications in materials science and geophysics, we consider the steady-state growth of anisotropic needle-like dendrites in undercooled binary mixtures with a forced convective flow. We analyse the stable mode of dendritic evolution in the case of small anisotropies of growth kinetics and surface energy for arbitrary Péclet numbers and n-fold symmetry of dendritic crystals. On the basis of solvability and stability theories, we formulate a selection criterion giving a stable combination between dendrite tip diameter and tip velocity. A set of nonlinear equations consisting of the solvability criterion and undercooling balance is solved analytically for the tip velocity V and tip diameter ρ of dendrites with n-fold symmetry in the absence of convective flow. The case of convective heat and mass transfer mechanisms in a binary mixture occurring as a result of intensive flows in the liquid phase is detailed. A selection criterion that describes such solidification conditions is derived. The theory under consideration comprises previously considered theoretical approaches and results as limiting cases. This article is part of the theme issue `From atomistic interfaces to dendritic patterns'.
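
    For orientation, in the simplest flow-free, thermally controlled limit, solvability theory selects the combination (stated here in its textbook form; the record's criterion generalizes it to n-fold anisotropy, growth kinetics and convection):

        \[
          \sigma^{*} \;=\; \frac{2\, d_0 D}{\rho^{2} V} \;\approx\; \mathrm{const} ,
        \]

    where \(d_0\) is the capillary length, \(D\) the thermal diffusivity, \(\rho\) the tip radius (the record works with the diameter; conventions differ only by a constant factor) and \(V\) the tip velocity, so that stability fixes the product \(\rho^2 V\).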

  5. Three-color mixing for classifying agricultural products for safety and quality

    NASA Astrophysics Data System (ADS)

    Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin; Kim, Moon S.

    2006-05-01

    A three-color mixing application for food safety inspection is presented. It is shown that the chromaticness of the visual signal resulting from the three-color mixing achieved through our device is directly related to the three-band ratio of light intensity at three selected wavebands. An optical visual device using three-color mixing to implement the three-band ratio criterion is presented. Inspection through human vision assisted by an optical device that implements the three-band ratio criterion would offer flexibility and significant cost savings as compared to inspection with a multispectral machine vision system that implements the same criterion. Example applications of this optical three-color mixing technique are given for the inspection of chicken carcasses with various diseases and for apples with fecal contamination. With proper selection of the three narrow wavebands, discrimination by chromaticness that has a direct relation with the three-band ratio can work very well. In particular, compared with the previously presented two-color mixing application, the conditions of chicken carcasses were more easily identified using the three-color mixing application. The novel three-color mixing technique for visual inspection can be implemented on visual devices for a variety of applications, ranging from target detection to food safety inspection.
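
    The chromaticity arithmetic behind the device is simple to state; a minimal sketch follows, with illustrative variable names, since the actual wavebands are application-specific:

        def chromaticity(i1, i2, i3):
            # Normalized coordinates of the three-band mix: the pair
            # (r, g) encodes the three-band ratio that the optical
            # device renders as perceived chromaticness.
            total = i1 + i2 + i3
            return i1 / total, i2 / total

    Two coordinates suffice because the three normalized band intensities sum to one.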

  6. Identification of specific malformations of sea urchin larvae for toxicity assessment: application to marine pisciculture effluents.

    PubMed

    Carballeira, C; Ramos-Gómez, J; Martín-Díaz, L; DelValls, T A

    2012-06-01

    Standard toxicity screening tests are useful tools in the management of impacted coastal ecosystems. To our knowledge, this is the first time that the sea urchin embryo development test has been used to evaluate the potential impact of effluents from land-based aquaculture farms in coastal areas. The toxicity of effluents from 8 land-based turbot farms was determined by calculating the percentage of abnormal larvae according to two criteria: (a) the standard criterion, which considers as normal any pyramid-shaped larvae with differentiated components, and (b) the skeletal criterion, a new criterion based on detailed skeletal characteristics. The skeletal criterion appeared to be more sensitive and, unlike the classical criterion, enabled calculation of the effective concentrations EC(5), EC(10), EC(20) and EC(50). Inclusion of the skeletal criterion in the sea urchin embryo development test may be useful for categorizing the relatively low toxicity of discharges from land-based marine fish farms. Further studies are encouraged to establish any causative relationships between pollutants and specific larval deformities. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. A mathematical programming method for formulating a fuzzy regression model based on distance criterion.

    PubMed

    Chen, Liang-Hsuan; Hsueh, Chan-Ching

    2007-06-01

    Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
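
    The correspondence's exact formulation is not reproduced in the abstract; the sketch below is a toy distance-criterion fit under simplifying assumptions: crisp inputs, a symmetric triangular fuzzy line, and observations supplied as alpha-cut intervals. All names and the loss form are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def fit_fuzzy_line(x, ylo, yhi, alphas):
            # Fit y_hat = (c0 + c1*x) with symmetric triangular spread
            # (s0 + s1*|x|) by minimizing squared endpoint distances
            # between predicted and observed alpha-cut intervals
            # ylo[a, i], yhi[a, i] at the given alpha levels.
            def loss(p):
                c0, c1, s0, s1 = p[0], p[1], abs(p[2]), abs(p[3])
                centre = c0 + c1 * x
                spread = s0 + s1 * np.abs(x)
                err = 0.0
                for a, lo, hi in zip(alphas, ylo, yhi):
                    half = (1.0 - a) * spread    # alpha-cut half-width
                    err += np.sum((centre - half - lo) ** 2)
                    err += np.sum((centre + half - hi) ** 2)
                return err
            return minimize(loss, x0=np.array([0.0, 1.0, 1.0, 0.0])).x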

  8. Wavelet subspace decomposition of thermal infrared images for defect detection in artworks

    NASA Astrophysics Data System (ADS)

    Ahmad, M. Z.; Khan, A. A.; Mezghani, S.; Perrin, E.; Mouhoubi, K.; Bodnar, J. L.; Vrabie, V.

    2016-07-01

    The health of ancient artworks must be routinely monitored for their adequate preservation. Faults in these artworks may develop over time and must be identified as precisely as possible. The classical acoustic testing techniques, being invasive, risk causing permanent damage during periodic inspections. Infrared thermometry offers a promising solution for mapping faults in artworks. It involves heating the artwork and recording its thermal response with an infrared camera. A novel strategy based on the pseudo-random binary excitation principle is used in this work to suppress the risks associated with prolonged heating. The objective of this work is to develop an automatic scheme for detecting faults in the captured images. An efficient scheme based on wavelet subspace decomposition is developed which favors identification of otherwise invisible, weaker faults. Two major problems addressed in this work are the selection of the optimal wavelet basis and the selection of the subspace level. A novel criterion based on regional mutual information is proposed for the latter. The approach is successfully tested on a laboratory-based sample as well as real artworks, and a new contrast enhancement metric is developed to demonstrate the quantitative efficiency of the algorithm.
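
    A minimal sketch of the subspace-decomposition step with PyWavelets: reconstruct the image from a single detail level, zeroing everything else. Scoring each candidate level and keeping the best stands in for the paper's regional-mutual-information criterion, which is not reproduced here.

        import numpy as np
        import pywt

        def detail_subspace(image, wavelet="db2", level=3, keep=2):
            # Rebuild the image from detail band `keep` only (1 =
            # coarsest, `level` = finest); the approximation and all
            # other detail bands are zeroed, so weak defects living in
            # that band stand out in the reconstruction.
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            kept = [np.zeros_like(coeffs[0])]
            for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
                if i == keep:
                    kept.append((cH, cV, cD))
                else:
                    kept.append((np.zeros_like(cH), np.zeros_like(cV),
                                 np.zeros_like(cD)))
            return pywt.waverec2(kept, wavelet)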

  9. Learning redundant motor tasks with and without overlapping dimensions: facilitation and interference effects.

    PubMed

    Ranganathan, Rajiv; Wieser, Jon; Mosier, Kristine M; Mussa-Ivaldi, Ferdinando A; Scheidt, Robert A

    2014-06-11

    Prior learning of a motor skill creates motor memories that can facilitate or interfere with learning of new, but related, motor skills. One hypothesis of motor learning posits that for a sensorimotor task with redundant degrees of freedom, the nervous system learns the geometric structure of the task and improves performance by selectively operating within that task space. We tested this hypothesis by examining if transfer of learning between two tasks depends on shared dimensionality between their respective task spaces. Human participants wore a data glove and learned to manipulate a computer cursor by moving their fingers. Separate groups of participants learned two tasks: a prior task that was unique to each group and a criterion task that was common to all groups. We manipulated the mapping between finger motions and cursor positions in the prior task to define task spaces that either shared or did not share the task space dimensions (x-y axes) of the criterion task. We found that if the prior task shared task dimensions with the criterion task, there was an initial facilitation in criterion task performance. However, if the prior task did not share task dimensions with the criterion task, there was prolonged interference in learning the criterion task due to participants finding inefficient task solutions. These results show that the nervous system learns the task space through practice, and that the degree of shared task space dimensionality influences the extent to which prior experience transfers to subsequent learning of related motor skills. Copyright © 2014 the authors.

  10. [On the problems of the evolutionary optimization of life history. II. To justification of optimization criterion for nonlinear Leslie model].

    PubMed

    Pasekov, V P

    2013-03-01

    The paper considers problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of an age-structured population. We study the possibility of predicting adaptation results as the values of an organism's traits (properties) that maximize a certain function of traits (the optimization criterion). An ideal criterion of this type is Darwinian fitness as a characteristic of the success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested that these objections can be overcome in terms of adaptive dynamics theory, using the concept of invasion fitness. Reasons are given that favor the use of the average number of offspring of an individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum R(L) corresponds to evolutionary stability and, in certain cases, convergent stability of the trait values. The search for evolutionarily stable values against the background of limited resources for reproduction is a problem of linear programming.

  11. An intelligent case-adjustment algorithm for the automated design of population-based quality auditing protocols.

    PubMed

    Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A

    2004-01-01

    We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between the increased reliability of more general population-based quality measures and the increased validity of individually case-adjusted but more restricted measures obtained at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.

  12. Stratiform and Convective Rain Discrimination from Microwave Radiometer Observations

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Cadeddu, M.; Short, D. A.; Weinman, J. A.; Schols, J. L.; Haferman, J.

    1997-01-01

    A criterion based on SSM/I observations is developed to discriminate rain into convective and stratiform types. This criterion depends on the microwave polarization properties of the flat melting snow particles that fall slowly in stratiform clouds. Utilizing this criterion and some spatial and temporal characteristics of hydrometeors in the TOGA-COARE area revealed by ship-borne radars, we have developed an algorithm to retrieve convective and stratiform rain rates from SSM/I data.

  13. Is It Reliable to Take the Molecular Docking Top Scoring Position as the Best Solution without Considering Available Structural Data?

    PubMed

    Ramírez, David; Caballero, Julio

    2018-04-28

    Molecular docking is the most frequently used computational method for studying the interactions between organic molecules and biological macromolecules. In this context, docking allows predicting the preferred pose of a ligand inside a receptor binding site. However, the selection of the “best” solution is not a trivial task, despite the widely accepted selection criterion that the best pose corresponds to the best energy score. Here, several rigid-target docking methods were evaluated on the same dataset with respect to their ability to reproduce crystallographic binding orientations, to test whether the best energy score is a reliable criterion for selecting the best solution. For this, two experiments were performed: (A) reconstructing the ligand-receptor complex by docking the ligand into its own crystal structure receptor (defined as self-docking), and (B) reconstructing the ligand-receptor complex by docking the ligand into a crystal structure receptor that contains another ligand (defined as cross-docking). Root-mean-square deviation (RMSD) was used to evaluate how different the obtained docking orientation is from the corresponding co-crystallized pose of the same ligand molecule. We found that the docking score function is capable of predicting crystallographic binding orientations, but the best-ranked solution according to the docking energy is not always the pose that reproduces the experimental binding orientation. This happened in self-docking, but it was critical in cross-docking. Taking into account that docking is typically used with predictive purposes, during cross-docking experiments, our results indicate that the best energy score is not a reliable criterion for selecting the best solution in common docking applications. It is strongly recommended to choose the best docking solution according to the scoring function along with additional structural criteria described for analogous ligands, to ensure the selection of a correct docking solution.
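
    The pose-similarity measure is standard; a minimal sketch follows, assuming the two poses share the receptor coordinate frame and atom ordering. A commonly used convention is to count an RMSD of at most about 2 Å as reproducing the crystallographic pose.

        import numpy as np

        def rmsd(pose_a, pose_b):
            # Root-mean-square deviation between two poses given as
            # (N, 3) arrays of matched atom coordinates; no alignment
            # is performed, as is usual when both poses already sit in
            # the same receptor frame.
            d = np.asarray(pose_a) - np.asarray(pose_b)
            return float(np.sqrt((d ** 2).sum(axis=1).mean()))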

  14. The genetic and economic effect of preliminary culling in the seedling orchard

    Treesearch

    Don E. Riemenschneider

    1977-01-01

    The genetic and economic effects of two stages of truncation selection in a white spruce seedling orchard were investigated by computer simulation. Genetic effects were computed by assuming a bivariate distribution of juvenile and mature traits and volume was used as the selection criterion. Seed production was assumed to rise in a linear fashion to maturity and then...

  15. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that both work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
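
    A minimal sketch of the information-theoretic route for Gaussian least-squares models; the deformation models themselves and the geodetic adjustment are outside its scope, and the helper name is illustrative:

        import numpy as np

        def aic_least_squares(rss, n, k):
            # AIC for a Gaussian model fitted by least squares:
            # n*ln(RSS/n) + 2k, where k is the number of estimated
            # parameters.  Among candidate deformation models, the one
            # with the smallest AIC is selected; unlike a hypothesis
            # test, no significance level has to be chosen.
            return n * np.log(rss / n) + 2 * k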

  16. 47 CFR 73.872 - Selection procedure for mutually exclusive LPFM applications.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... locally at least eight hours of programming per day. For purposes of this criterion, local origination is the production of programming, by the licensee, within ten miles of the coordinates of the proposed...

  17. Optimal experimental designs for fMRI when the model matrix is uncertain.

    PubMed

    Kao, Ming-Hung; Zhou, Lin

    2017-07-15

    This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g., answer) to each mental stimulus (e.g., question) presented to him or her. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs optimizing this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.
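
    The paper's criterion, which accounts for the subject's uncertain feedback, is not reproduced in the abstract; the sketch below shows only the generic shape of such a design search, using plain A-optimality on candidate model matrices as an assumed stand-in:

        import numpy as np

        def a_optimal_index(model_matrices):
            # Pick the candidate design whose model matrix X minimizes
            # trace((X'X)^-1), i.e. the A-optimal candidate; assumes
            # every X has full column rank.  The paper's criterion
            # would additionally average over possible feedbacks.
            scores = [np.trace(np.linalg.inv(X.T @ X))
                      for X in model_matrices]
            return int(np.argmin(scores))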

  18. [Can Psychometric Tests Predict Success in the Selection Interview for Medical School? A Cross-Sectional Study at One German Medical School].

    PubMed

    Kötter, T; Obst, K U; Brüheim, L; Eisemann, N; Voltmer, E; Katalinic, A

    2017-07-01

    Background The final exam grade is the main selection criterion for medical school application in Germany. For academic success, it seems to be a reliable predictor. Its use as the only selection criterion is, however, criticised. At some universities, personal interviews are part of the selection process. However, these are very time-consuming and of doubtful validity. The (additional) use of appropriate psychometric instruments could reduce the cost and increase the validity. This study investigates the extent to which psychometric instruments can predict the outcome of a personal selection interview. Methods This is a cross-sectional study on the correlation of the results of psychometric instruments with those of the personal selection interview as part of the application process. The score of the selection interview was used as the outcome. The NEO Five-Factor Inventory, the Hospital Anxiety and Depression Scale (HADS) and the questionnaire to identify work-related behaviour and experience patterns (AVEM) were used as psychometric instruments. Results There was a statistically significant correlation with the results of the personal selection interview for the sum score of the depression scale from the HADS and the sum score for the dimension of life satisfaction of the AVEM. In addition, those participants who had not previously completed an application training achieved a better result in the selection interview. Conclusion The instruments used measure different aspects than the interviews and cannot replace them. It remains to be seen whether the selected parameters are able to predict academic success. © Georg Thieme Verlag KG Stuttgart · New York.

  19. Valuing Equal Protection in Aviation Security Screening.

    PubMed

    Nguyen, Kenneth D; Rosoff, Heather; John, Richard S

    2017-12-01

    The growing number of anti-terrorism policies has elevated public concerns about discrimination. Within the context of airport security screening, the current study examines how American travelers value the principle of equal protection by quantifying the "equity premium" that they are willing to sacrifice to avoid screening procedures that result in differential treatments. In addition, we applied the notion of procedural justice to explore the effect of alternative selective screening procedures on the value of equal protection. Two-hundred and twenty-two respondents were randomly assigned to one of three selective screening procedures: (1) randomly, (2) using behavioral indicators, or (3) based on demographic characteristics. They were asked to choose between airlines using either an equal or a discriminatory screening procedure. While the former requires all passengers to be screened in the same manner, the latter mandates all passengers undergo a quick primary screening and, in addition, some passengers are selected for a secondary screening based on a predetermined selection criterion. Equity premiums were quantified in terms of monetary cost, wait time, convenience, and safety compromise. Results show that equity premiums varied greatly across respondents, with many indicating little willingness to sacrifice to avoid inequitable screening, and a smaller minority willing to sacrifice anything to avoid the discriminatory screening. The selective screening manipulation was effective in that equity premiums were greater under selection by demographic characteristics compared to the other two procedures. © 2017 Society for Risk Analysis.

  20. Urey prize lecture: On the diversity of plausible planetary systems

    NASA Technical Reports Server (NTRS)

    Lissauer, J. J.

    1995-01-01

    Models of planet formation and of the orbital stability of planetary systems are used to predict the variety of planetary and satellite systems that may be present within our galaxy. A new approximate global criterion for the orbital stability of planetary systems, based on an extension of the local resonance overlap criterion, is proposed. This criterion implies that at least some of Uranus' small inner moons are significantly less massive than predicted by estimates based on Voyager volumes and densities assumed to equal that of Miranda. Simple calculations (neglecting planetary gravity) suggest that giant planets which accrete substantial amounts of gas while their envelopes are extremely distended ultimately rotate rapidly in the prograde direction.

  1. Selection of Applicants for the Air Traffic Controller Occupation,

    DTIC Science & Technology

    1981-07-01

    [OCR-garbled report front matter; recoverable fragments only: a table of contents listing "Development of Present Selection Procedures", "Problem and Research Objectives", "Air Traffic Control Job Analysis" and "Overview of Research Studies"; a note on a 1972 study, "Selection of Air Traffic Control Specialists", and its objectives; and a reference to predictor and criterion measures of ATC "success" (1975) for controllers participating in 1971 ATC research.]

  2. The nexus between geopolitical uncertainty and crude oil markets: An entropy-based wavelet analysis

    NASA Astrophysics Data System (ADS)

    Uddin, Gazi Salah; Bekiros, Stelios; Ahmed, Ali

    2018-04-01

    The global financial crisis and the subsequent geopolitical turbulence in energy markets have brought increased attention to proper statistical modeling, especially of the crude oil markets. In particular, we utilize a time-frequency decomposition approach based on wavelet analysis to explore the inherent dynamics and the causal interrelationships between various types of geopolitical, economic and financial uncertainty indices and oil markets. Via the introduction of a mixed discrete-continuous multiresolution analysis, we employ an entropic criterion for the selection of the optimal decomposition level of a MODWT, as well as continuous-time coherency and phase measures for the detection of business cycle (a)synchronization. Overall, a strong heterogeneity in the revealed interrelationships is detected over time and across scales.

  3. Machine learning for real time remote detection

    NASA Astrophysics Data System (ADS)

    Labbé, Benjamin; Fournier, Jérôme; Henaff, Gilles; Bascle, Bénédicte; Canu, Stéphane

    2010-10-01

    Infrared systems are key to providing enhanced capability to military forces, such as automatic control of threats and prevention of air, naval and ground attacks. Key requirements for such a system to produce operational benefits are real-time processing as well as high efficiency in terms of detection and false alarm rates. These are serious issues, since the system must deal with a large number of objects and categories to be recognized (small vehicles, armored vehicles, planes, buildings, etc.). Statistical-learning-based algorithms are promising candidates to meet these requirements when using selected discriminant features and real-time implementation. This paper proposes a new decision architecture benefiting from recent advances in machine learning by using an effective method for level set estimation. While building the decision function, the proposed approach performs variable selection based on a discriminative criterion. Moreover, the use of level sets makes it possible to manage the rejection of unknown or ambiguous objects, thus preserving the false alarm rate. Experimental evidence reported on real-world infrared images demonstrates the validity of our approach.
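
    A minimal sketch of the rejection idea with scikit-learn, assuming a probabilistic classifier: predictions whose maximum posterior falls below a threshold are rejected as unknown rather than forced into a class. The paper's level-set estimator is more refined; the threshold and data here are arbitrary assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def predict_with_reject(clf, X, threshold=0.8):
            # Return the class label only when the maximum posterior
            # probability clears the threshold; otherwise return -1
            # ("unknown"), which keeps ambiguous objects from inflating
            # the false alarm rate.
            proba = clf.predict_proba(X)
            labels = clf.classes_[np.argmax(proba, axis=1)]
            return np.where(proba.max(axis=1) >= threshold, labels, -1)

        rng = np.random.default_rng(0)
        clf = LogisticRegression().fit(rng.normal(size=(200, 5)),
                                       rng.integers(0, 3, 200))
        print(predict_with_reject(clf, rng.normal(size=(10, 5))))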

  4. Delay-Dependent Stability Criterion for Bidirectional Associative Memory Neural Networks with Interval Time-Varying Delays

    NASA Astrophysics Data System (ADS)

    Park, Ju H.; Kwon, O. M.

    In this letter, the global asymptotic stability of bidirectional associative memory (BAM) neural networks with delays is investigated. The delay is assumed to be time-varying and to belong to a given interval. A novel stability criterion is presented based on the Lyapunov method. The criterion is represented in terms of a linear matrix inequality (LMI), which can be solved easily by various optimization algorithms. Two numerical examples are given to show the effectiveness of the new result.
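
    LMI criteria of this kind are checked numerically with a semidefinite solver. As a simplified, delay-free analogue (not the letter's criterion, which adds delay-dependent terms), the classical Lyapunov LMI can be tested with cvxpy:

        import cvxpy as cp
        import numpy as np

        # x' = Ax is asymptotically stable iff there is P > 0 with
        # A'P + PA < 0; feasibility of this LMI is the stability test.
        A = np.array([[-2.0, 1.0], [0.5, -3.0]])
        P = cp.Variable((2, 2), symmetric=True)
        eps = 1e-6
        cons = [P >> eps * np.eye(2),
                A.T @ P + P @ A << -eps * np.eye(2)]
        prob = cp.Problem(cp.Minimize(0), cons)
        prob.solve()
        print("stable" if prob.status == "optimal" else "LMI infeasible")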

  5. Selection criteria of residents for residency programs in Kuwait

    PubMed Central

    2013-01-01

    Background In Kuwait, 21 residency training programs were offered in the year 2011; however, no data are available regarding the criteria for selecting residents for these programs. This study aims to provide information about the importance of these criteria. Methods A self-administered questionnaire was used to collect data from members (e.g., chairmen, directors, assistants) of residency programs in Kuwait. A total of 108 members were invited to participate. They were asked to rate the importance level (scale from 1 to 5) of criteria that may affect the acceptance of an applicant to their residency programs. Average scores were calculated for each criterion. Results Of the 108 members invited to participate, only 12 (11.1%) declined to participate. Interview performance was ranked as the most important criterion for selecting residents (average score: 4.63/5.00), followed by grade point average (average score: 3.78/5.00) and honors during medical school (average score: 3.67/5.00). On the other hand, receiving disciplinary action during medical school and failure in a required clerkship were considered the most concerning among the criteria used to reject applicants (average scores: 3.83/5.00 and 3.54/5.00, respectively). Minor differences regarding the importance level of each criterion were noted across different programs. Conclusions This study provided general information about the criteria that are used to accept or reject applicants to residency programs in Kuwait. Future studies should be conducted to investigate each criterion individually, and to assess whether these criteria are related to residents' success during their training. PMID:23331670

  6. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable block-size transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
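
    For orientation, the textbook high-rate bit-allocation rule that such algorithms refine, in a minimal sketch (clipping negative allocations to zero is a practical assumption, not part of the closed-form rule):

        import numpy as np

        def allocate_bits(variances, avg_rate):
            # Classical transform-coding bit allocation:
            # b_k = R + 0.5*log2(sigma_k^2 / geometric_mean(sigma^2)),
            # giving more bits to higher-variance coefficients while
            # keeping the average rate at R (variances must be > 0).
            var = np.asarray(variances, dtype=float)
            gm = np.exp(np.mean(np.log(var)))      # geometric mean
            bits = avg_rate + 0.5 * np.log2(var / gm)
            return np.clip(bits, 0.0, None)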

  7. Criterion-Related Validity of the Distance- and Time-Based Walk/Run Field Tests for Estimating Cardiorespiratory Fitness: A Systematic Review and Meta-Analysis.

    PubMed

    Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús

    2016-01-01

    The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run field tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the highest criterion-related validity among the distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in assessment with any physical fitness field test, evaluators must be aware that the performance score of a walk/run field test is simply an estimation and not a direct measure of cardiorespiratory fitness.

  8. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologic-based low flow (7Q10) respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. 
Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.

  9. A dynamic multiarmed bandit-gene expression programming hyper-heuristic for combinatorial optimization problems.

    PubMed

    Sabar, Nasser R; Ayob, Masri; Kendall, Graham; Qu, Rong

    2015-02-01

    Hyper-heuristics are search methodologies that aim to provide high-quality solutions across a wide variety of problem domains, rather than developing tailor-made methodologies for each problem instance/domain. A traditional hyper-heuristic framework has two levels, namely, the high level strategy (heuristic selection mechanism and the acceptance criterion) and low level heuristics (a set of problem specific heuristics). Due to the different landscape structures of different problem instances, the high level strategy plays an important role in the design of a hyper-heuristic framework. In this paper, we propose a new high level strategy for a hyper-heuristic framework. The proposed high-level strategy utilizes a dynamic multiarmed bandit-extreme value-based reward as an online heuristic selection mechanism to select the appropriate heuristic to be applied at each iteration. In addition, we propose a gene expression programming framework to automatically generate the acceptance criterion for each problem instance, instead of using human-designed criteria. Two well-known, and very different, combinatorial optimization problems, one static (exam timetabling) and one dynamic (dynamic vehicle routing) are used to demonstrate the generality of the proposed framework. Compared with state-of-the-art hyper-heuristics and other bespoke methods, empirical results demonstrate that the proposed framework is able to generalize well across both domains. We obtain competitive, if not better results, when compared to the best known results obtained from other methods that have been presented in the scientific literature. We also compare our approach against the recently released hyper-heuristic competition test suite. We again demonstrate the generality of our approach when we compare against other methods that have utilized the same six benchmark datasets from this test suite.
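
    A minimal sketch of bandit-style heuristic selection: each low-level heuristic is an arm, and plain UCB1 trades off average reward against exploration. The paper's dynamic multiarmed bandit with extreme-value rewards is more elaborate; the reward model below is a stand-in.

        import math
        import random

        def ucb1_select(counts, rewards, t):
            # UCB1: mean reward plus an exploration bonus that shrinks
            # as an arm (heuristic) is tried more often.
            for i, c in enumerate(counts):
                if c == 0:
                    return i                  # try every heuristic once
            return max(range(len(counts)),
                       key=lambda i: rewards[i] / counts[i]
                       + math.sqrt(2 * math.log(t) / counts[i]))

        counts, rewards = [0] * 3, [0.0] * 3
        for t in range(1, 101):
            arm = ucb1_select(counts, rewards, t)
            counts[arm] += 1
            rewards[arm] += random.random() * (arm + 1)  # stand-in reward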

  10. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization

    NASA Astrophysics Data System (ADS)

    Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki

    Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel.
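
    For orientation, the per-subcarrier MMSE-FDE weight in its standard single-user form (the spreading and multi-user terms of the orthogonal MC DS-CDMA case are omitted), with \(\hat H_k\) the channel estimate obtained by removing the known pilot modulation \(P_k\) from the received pilot \(Y_k\):

        \[
          w_k \;=\; \frac{\hat H_k^{*}}{\,|\hat H_k|^{2} + (E_s/N_0)^{-1}\,},
          \qquad
          \hat H_k \;=\; Y_k / P_k .
        \]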

  11. Which products are available for subsetting?

    Atmospheric Science Data Center

    2014-12-08

    ... users to create smaller files (subsets) of the original data by selecting desired parameters, parameter criterion, or latitude and ... fluxes, where the net flux is constrained to the global heat storage in netCDF format. Single Scanner Footprint TOA/Surface Fluxes ...

  12. Vortex Advisory System Safety Analysis : Volume 1. Analytical Model

    DOT National Transportation Integrated Search

    1978-09-01

    The Vortex Advisory System (VAS) is based on a wind criterion: when the wind near the runway end is outside of the criterion, all interarrival Instrument Flight Rules (IFR) aircraft separations can be set at 3 nautical miles. Five years of wind data ha...

  13. Interface Pattern Selection in Directional Solidification

    NASA Technical Reports Server (NTRS)

    Trivedi, Rohit; Tewari, Surendra N.

    2001-01-01

    The central focus of this research is to establish the key scientific concepts that govern the selection of cellular and dendritic patterns during the directional solidification of alloys. Ground-based studies have established that the conditions under which cellular and dendritic microstructures form are precisely those where convection effects are dominant in bulk samples. Thus, experimental data cannot be obtained terrestrially under a purely diffusive regime. Furthermore, reliable theoretical models that can quantitatively incorporate fluid flow in the pattern selection criterion are not yet possible. Consequently, microgravity experiments on cellular and dendritic growth are designed to obtain benchmark data under diffusive growth conditions that can be quantitatively analyzed and compared with rigorous theoretical models, to establish the fundamental principles that govern the selection of a specific microstructure and its length scales. In a cellular structure, different cells in an array are strongly coupled, so that the cellular pattern evolution is controlled by complex interactions between thermal diffusion, solute diffusion and interface effects. These interactions give an infinity of solutions, and the system selects only a narrow band of them. The aim of this investigation is to obtain benchmark data and develop a rigorous theoretical model that will allow us to quantitatively establish the physics of this selection process.

  14. Method for wiring allocation and switch configuration in a multiprocessor environment

    DOEpatents

    Aridor, Yariv [Zichron Ya'akov, IL; Domany, Tamar [Kiryat Tivon, IL; Frachtenberg, Eitan [Jerusalem, IL; Gal, Yoav [Haifa, IL; Shmueli, Edi [Haifa, IL; Stockmeyer, legal representative, Robert E.; Stockmeyer, Larry Joseph [San Jose, CA

    2008-07-15

    A method for wiring allocation and switch configuration in a multiprocessor computer, the method including employing depth-first tree traversal to determine a plurality of paths among a plurality of processing elements allocated to a job along a plurality of switches and wires in a plurality of D-lines, and selecting one of the paths in accordance with at least one selection criterion.
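
    A toy sketch of the traversal step only (not the patented allocation method): depth-first enumeration of simple paths through a switch graph, followed by selection under an assumed criterion such as fewest hops. The graph and names are illustrative.

        def all_paths(graph, src, dst, path=None):
            # Depth-first enumeration of simple paths in a graph given
            # as {node: [neighbours]}; cycles are avoided by never
            # revisiting a node already on the current path.
            path = (path or []) + [src]
            if src == dst:
                yield path
                return
            for nxt in graph.get(src, []):
                if nxt not in path:
                    yield from all_paths(graph, nxt, dst, path)

        graph = {"A": ["S1", "S2"], "S1": ["B"], "S2": ["S1", "B"]}
        # Selection criterion (assumed): fewest switches traversed.
        print(min(all_paths(graph, "A", "B"), key=len))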

  15. Comparison of Sensor Selection Mechanisms for an ERP-Based Brain-Computer Interface

    PubMed Central

    Metzen, Jan H.

    2013-01-01

    A major barrier for a broad applicability of brain-computer interfaces (BCIs) based on electroencephalography (EEG) is the large number of EEG sensor electrodes typically used. The necessity for this results from the fact that the relevant information for the BCI is often spread over the scalp in complex patterns that differ depending on subjects and application scenarios. Recently, a number of methods have been proposed to determine an individual optimal sensor selection. These methods have, however, rarely been compared against each other or against any type of baseline. In this paper, we review several selection approaches and propose one additional selection criterion based on the evaluation of the performance of a BCI system using a reduced set of sensors. We evaluate the methods in the context of a passive BCI system that is designed to detect a P300 event-related potential and compare the performance of the methods against randomly generated sensor constellations. For a realistic estimation of the reduced system's performance we transfer sensor constellations found on one experimental session to a different session for evaluation. We identified notable (and unanticipated) differences among the methods and could demonstrate that the best method in our setup is able to reduce the required number of sensors considerably. Though our application focuses on EEG data, all presented algorithms and evaluation schemes can be transferred to any binary classification task on sensor arrays. PMID:23844021

  16. Identifying Epigenetic Biomarkers using Maximal Relevance and Minimal Redundancy Based Feature Selection for Multi-Omics Data.

    PubMed

    Mallik, Saurav; Bhadra, Tapas; Maulik, Ujjwal

    2017-01-01

    Epigenetic biomarker discovery is an important task in bioinformatics. In this article, we develop a new framework for identifying statistically significant epigenetic biomarkers using maximal-relevance, minimal-redundancy criterion-based feature (gene) selection for multi-omics datasets. First, we determine the genes that have both expression and methylation values and follow a normal distribution. Similarly, we identify the genes which have both expression and methylation values but do not follow a normal distribution. For each case, we utilize a gene-selection method that provides maximally relevant but variable-weighted minimally redundant genes as top-ranked genes. For statistical validation, we apply the t-test on both the expression and methylation data consisting of only the normally distributed top-ranked genes to determine how many of them are both differentially expressed and methylated. Similarly, we utilize the Limma package to perform the non-parametric empirical Bayes test on both expression and methylation data comprising only the non-normally distributed top-ranked genes to identify how many of them are both differentially expressed and methylated. We finally report the top-ranking significant gene markers with biological validation. Moreover, our framework improves the positive predictive rate and reduces the false positive rate in marker identification. In addition, we provide a comparative analysis of our gene-selection method as well as other methods, based on classification performances obtained using several well-known classifiers.
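
    A generic sketch of the greedy maximal-relevance, minimal-redundancy (mRMR) step using scikit-learn mutual-information estimators; the article additionally weights redundancy and layers the t-test/empirical-Bayes validation on top, none of which is reproduced here.

        import numpy as np
        from sklearn.feature_selection import (mutual_info_classif,
                                               mutual_info_regression)

        def mrmr(X, y, k):
            # Start from the most relevant feature, then repeatedly add
            # the feature maximizing relevance minus mean redundancy
            # (mutual information against the already selected set).
            relevance = mutual_info_classif(X, y)
            selected = [int(np.argmax(relevance))]
            while len(selected) < k:
                best, best_score = None, -np.inf
                for j in range(X.shape[1]):
                    if j in selected:
                        continue
                    redundancy = mutual_info_regression(
                        X[:, selected], X[:, j]).mean()
                    if relevance[j] - redundancy > best_score:
                        best, best_score = j, relevance[j] - redundancy
                selected.append(best)
            return selected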

  17. Pathological diagnostic criterion of blood and lymphatic vessel invasion in colorectal cancer: a framework for developing an objective pathological diagnostic system using the Delphi method, from the Pathology Working Group of the Japanese Society for Cancer of the Colon and Rectum.

    PubMed

    Kojima, Motohiro; Shimazaki, Hideyuki; Iwaya, Keiichi; Kage, Masayoshi; Akiba, Jun; Ohkura, Yasuo; Horiguchi, Shinichiro; Shomori, Kohei; Kushima, Ryoji; Ajioka, Yoichi; Nomura, Shogo; Ochiai, Atsushi

    2013-07-01

    The goal of this study is to create an objective pathological diagnostic system for blood and lymphatic vessel invasion (BLI). 1450 surgically resected colorectal cancer specimens from eight hospitals were reviewed. Our first step was to compare the current practice of pathology assessment among the eight hospitals. Then, H&E-stained slides, with or without histochemical/immunohistochemical staining, were assessed by eight pathologists, and the concordance of BLI diagnosis was checked. In addition, histological findings associated with BLI having good concordance were reviewed. Based on these results, a framework for developing a diagnostic criterion was constructed using the Delphi method. The new criterion was evaluated using 40 colorectal cancer specimens. The frequency of BLI diagnoses and the number of blocks obtained and stained for assessment of BLI varied among the eight hospitals. Concordance was low for BLI diagnosis and was not any better when histochemical/immunohistochemical staining was provided. All histological findings associated with BLI from H&E staining were poor in agreement. However, observation of elastica-stained internal elastic membrane covering more than half of the circumference surrounding the tumour cluster, as well as the presence of D2-40-stained endothelial cells covering more than half of the circumference surrounding the tumour cluster, showed high concordance. Based on this observation, we developed a framework for a pathological diagnostic criterion using the Delphi method. This criterion was found to be useful in improving the concordance of BLI diagnosis. A framework for a pathological diagnostic criterion was developed by reviewing concordance and using the Delphi method. The criterion developed may serve as the basis for creating a standardised procedure for pathological diagnosis.

  18. Pathological diagnostic criterion of blood and lymphatic vessel invasion in colorectal cancer: a framework for developing an objective pathological diagnostic system using the Delphi method, from the Pathology Working Group of the Japanese Society for Cancer of the Colon and Rectum

    PubMed Central

    Kojima, Motohiro; Shimazaki, Hideyuki; Iwaya, Keiichi; Kage, Masayoshi; Akiba, Jun; Ohkura, Yasuo; Horiguchi, Shinichiro; Shomori, Kohei; Kushima, Ryoji; Ajioka, Yoichi; Nomura, Shogo; Ochiai, Atsushi

    2013-01-01

    Aims The goal of this study is to create an objective pathological diagnostic system for blood and lymphatic vessel invasion (BLI). Methods 1450 surgically resected colorectal cancer specimens from eight hospitals were reviewed. Our first step was to compare the current practice of pathology assessment among the eight hospitals. Then, H&E-stained slides, with or without histochemical/immunohistochemical staining, were assessed by eight pathologists, and the concordance of BLI diagnosis was checked. In addition, histological findings associated with BLI having good concordance were reviewed. Based on these results, a framework for developing a diagnostic criterion was constructed using the Delphi method. The new criterion was evaluated using 40 colorectal cancer specimens. Results The frequency of BLI diagnoses and the number of blocks obtained and stained for assessment of BLI varied among the eight hospitals. Concordance was low for BLI diagnosis and was not any better when histochemical/immunohistochemical staining was provided. All histological findings associated with BLI from H&E staining were poor in agreement. However, observation of elastica-stained internal elastic membrane covering more than half of the circumference surrounding the tumour cluster, as well as the presence of D2-40-stained endothelial cells covering more than half of the circumference surrounding the tumour cluster, showed high concordance. Based on this observation, we developed a framework for a pathological diagnostic criterion using the Delphi method. This criterion was found to be useful in improving the concordance of BLI diagnosis. Conclusions A framework for a pathological diagnostic criterion was developed by reviewing concordance and using the Delphi method. The criterion developed may serve as the basis for creating a standardised procedure for pathological diagnosis. PMID:23592799

  19. Differential prioritization between relevance and redundancy in correlation-based feature selection techniques for multiclass gene expression data.

    PubMed

    Ooi, Chia Huey; Chetty, Madhu; Teng, Shyh Wei

    2006-06-23

    Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied on microarray datasets are either rank-based and hence do not take into account correlations between genes, or are wrapper-based, which require high computational cost, and often yield difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures which are less than meticulous, resulting in overly optimistic estimates of accuracy. We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). DDP functions as a parameter to strike the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy while using reasonably small predictor set sizes for nine well-known multiclass microarray datasets. For multiclass microarray datasets, especially the GCM and NCI60 datasets, DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies which employed similarly realistic evaluation procedures.

  20. A new frequency matching technique for FRF-based model updating

    NASA Astrophysics Data System (ADS)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update finite element models. They are a form of original measurement information, with the advantages of rich data and freedom from modal extraction errors. However, like other sensitivity-based methods, an FRF-based identification method must also face the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given frequency measurement, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical and the corresponding experimental FRF, which in turn amplifies the effects of measurement errors and damping. Hence, in the solution process, correct selection of the frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for correct frequency selection based on the correlation of FRFs is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method which directly finds the frequency that minimizes the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required in each approach to correctly update the analytical model is regarded as the standard for correct identification.
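
    For orientation, the FDAC referenced above is a MAC-like correlation between analytical and experimental FRF vectors evaluated over all frequency pairs; a minimal sketch, where the array layout is an assumption:

        import numpy as np

        def fdac(H_ana, H_exp):
            # FDAC between analytical FRFs H_ana (n_dofs x n_freqs_a)
            # and experimental FRFs H_exp (n_dofs x n_freqs_e): squared
            # normalized inner product of the FRF vectors at every
            # frequency pair, giving values in [0, 1].
            num = np.abs(H_ana.conj().T @ H_exp) ** 2
            na = np.sum(np.abs(H_ana) ** 2, axis=0)  # per analytical freq
            ne = np.sum(np.abs(H_exp) ** 2, axis=0)  # per experimental freq
            return num / np.outer(na, ne)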

  1. Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion

    NASA Astrophysics Data System (ADS)

    Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.

    2017-09-01

    Assessment of discriminant validity is a must in any research that involves latent variables, in order to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method for establishing discriminant validity has emerged: the heterotrait-monotrait (HTMT) ratio of correlations. This article therefore presents the results of discriminant validity assessment using both methods. Data from a previous study involving 429 respondents, collected for the empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia, were used. From the analysis, convergent, divergent, and discriminant validity were established and deemed admissible under the Fornell and Larcker criterion. However, discriminant validity was an issue when the HTMT criterion was employed. This shows that the latent variables under study faced multicollinearity and should be examined in further detail. It also implies that the HTMT criterion is a stringent measure, capable of detecting a lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
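
    A hedged sketch of the HTMT computation for one pair of constructs is given below; the 0.85/0.90 cutoffs mentioned in the comment are the commonly cited rules of thumb, not values taken from this study.

    ```python
    import numpy as np

    def htmt(R, items_i, items_j):
        """Heterotrait-monotrait ratio for two constructs, given an item
        correlation matrix R and the item index lists of each construct
        (assumes at least two items per construct)."""
        hetero = R[np.ix_(items_i, items_j)].mean()        # between-construct correlations
        iu_i = np.triu_indices(len(items_i), k=1)
        iu_j = np.triu_indices(len(items_j), k=1)
        mono_i = R[np.ix_(items_i, items_i)][iu_i].mean()  # within-construct correlations
        mono_j = R[np.ix_(items_j, items_j)][iu_j].mean()
        return hetero / np.sqrt(mono_i * mono_j)

    # Rule of thumb: HTMT above 0.85 (or, more liberally, 0.90) flags a
    # discriminant validity problem between the two constructs.
    ```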

  2. Prediction of fracture initiation in square cup drawing of DP980 using an anisotropic ductile fracture criterion

    NASA Astrophysics Data System (ADS)

    Park, N.; Huh, H.; Yoon, J. W.

    2017-09-01

    This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with a thickness of 1.2 mm. In an attempt to consider the influence of material anisotropy on the fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at different loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries including pure shear, dog-bone, and flat grooved specimens so as to calibrate the parameters of the proposed fracture criterion. Equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until a surface crack initiates. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit by developing a Vectorized User-defined MATerial (VUMAT) subroutine which features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed fracture criterion is capable of predicting the fracture initiation with sufficient accuracy considering the material anisotropy.

  3. Office-based ultrasound screening for abdominal aortic aneurysm

    PubMed Central

    Blois, Beau

    2012-01-01

    Objective To assess the efficacy of an office-based, family physician–administered ultrasound examination to screen for abdominal aortic aneurysm (AAA). Design A prospective observational study. Consecutive patients were approached by nonphysician staff. Setting Rural family physician offices in Grand Forks and Revelstoke, BC. Participants The Canadian Society for Vascular Surgery screening recommendations for AAA were used to help select patients who were at risk of AAA. All men 65 years of age or older were included. Women 65 years of age or older were included if they were current smokers or had diabetes, hypertension, a history of coronary artery disease, or a family history of AAA. Main outcome measures A focused “quick screen,” which measured the maximal diameter of the abdominal aorta using point-of-care ultrasound technology, was performed in the office by a resident physician trained in emergency ultrasonography. Each patient was then booked for a criterion standard scan (ie, a conventional abdominal ultrasound scan performed by a technician and interpreted by a radiologist). The maximal abdominal aortic diameter measured by ultrasound in the office was compared with that measured by the criterion standard method. The time to screen each patient was recorded. Results Forty-five patients were included in data analysis; 62% of participants were men. The mean age was 73 years. The mean pairwise difference between the office-based ultrasound scan and the criterion standard scan was not statistically significant. The mean absolute difference between the 2 scans was 0.20 cm (95% CI 0.15 to 0.25 cm). Correlation between the scans was 0.81. The office-based ultrasound scan had both a sensitivity and a specificity of 100%. The mean time to screen each patient was 212 seconds (95% CI 194 to 230 seconds). Conclusion Abdominal aortic aneurysm screening can be safely performed in the office by family physicians who are trained to use point-of-care ultrasound technology. The screening test can be completed within the time constraints of a busy family practice office visit. The benefit of screening for AAA in rural patients might be great if local diagnostic ultrasound service and emergent transport to a vascular surgeon are not available. PMID:22518906

  4. Office-based ultrasound screening for abdominal aortic aneurysm.

    PubMed

    Blois, Beau

    2012-03-01

    To assess the efficacy of an office-based, family physician–administered ultrasound examination to screen for abdominal aortic aneurysm (AAA). A prospective observational study. Consecutive patients were approached by nonphysician staff. Rural family physician offices in Grand Forks and Revelstoke, BC. The Canadian Society for Vascular Surgery screening recommendations for AAA were used to help select patients who were at risk of AAA. All men 65 years of age or older were included. Women 65 years of age or older were included if they were current smokers or had diabetes, hypertension, a history of coronary artery disease, or a family history of AAA. A focused “quick screen”, which measured the maximal diameter of the abdominal aorta using point-of-care ultrasound technology, was performed in the office by a resident physician trained in emergency ultrasonography. Each patient was then booked for a criterion standard scan (i.e., a conventional abdominal ultrasound scan performed by a technician and interpreted by a radiologist). The maximal abdominal aortic diameter measured by ultrasound in the office was compared with that measured by the criterion standard method. The time to screen each patient was recorded. Forty-five patients were included in data analysis; 62% of participants were men. The mean age was 73 years. The mean pairwise difference between the office-based ultrasound scan and the criterion standard scan was not statistically significant. The mean absolute difference between the 2 scans was 0.20 cm (95% CI 0.15 to 0.25 cm). Correlation between the scans was 0.81. The office-based ultrasound scan had both a sensitivity and a specificity of 100%. The mean time to screen each patient was 212 seconds (95% CI 194 to 230 seconds). Abdominal aortic aneurysm screening can be safely performed in the office by family physicians who are trained to use point-of-care ultrasound technology. The screening test can be completed within the time constraints of a busy family practice office visit. The benefit of screening for AAA in rural patients might be great if local diagnostic ultrasound service and emergent transport to a vascular surgeon are not available.

  5. Criterion-Referenced Test Items for Welding.

    ERIC Educational Resources Information Center

    Davis, Diane, Ed.

    This test item bank on welding contains test questions based upon competencies found in the Missouri Welding Competency Profile. Some test items are keyed for multiple competencies. These criterion-referenced test items are designed to work with the Vocational Instructional Management System. Questions have been statistically sampled and validated…

  6. A hybrid gene selection approach for microarray data classification using cellular learning automata and ant colony optimization.

    PubMed

    Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein

    2016-06-01

    This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with ant colony optimization (ACO) is used to find the set of features which improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The features selected in the last phase are evaluated using the ROC curve, and the most effective yet smallest feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
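
    The primary filter stage can be illustrated with a small Fisher-criterion scorer. The exact normalization the authors use is not stated here, so treat this as one common form of the criterion:

    ```python
    import numpy as np

    def fisher_scores(X, y):
        """Fisher criterion per gene: between-class scatter of the class means
        over the summed within-class variances; higher = more discriminative."""
        classes = np.unique(y)
        overall = X.mean(axis=0)
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            den += len(Xc) * Xc.var(axis=0)
        return num / (den + 1e-12)

    # Primary filter: keep the top-m genes before the CLA/ACO wrapper search.
    # top_genes = np.argsort(fisher_scores(X, y))[::-1][:m]
    ```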

  7. Why is auditory frequency weighting so important in regulation of underwater noise?

    PubMed

    Tougaard, Jakob; Dähne, Michael

    2017-10-01

    A key question related to regulating noise from pile driving, air guns, and sonars is how to take into account the hearing abilities of different animals by means of auditory frequency weighting. Recordings of pile driving sounds, both in the presence and absence of a bubble curtain, were evaluated against recent thresholds for temporary threshold shift (TTS) for harbor porpoises by means of four different weighting functions. The assessed effectiveness, expressed as time until TTS, depended strongly on the choice of weighting function: 2 orders of magnitude larger for an audiogram-weighted TTS criterion relative to an unweighted criterion, highlighting the importance of selecting the right frequency weighting.
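
    Why the weighting choice moves the answer by orders of magnitude can be seen with a toy equal-energy calculation: a weighting function attenuates the low-frequency bands that dominate pile-driving energy, which lowers the per-strike weighted sound exposure level (SEL) and stretches the time until the cumulative exposure reaches the TTS criterion. Every numeric input to this sketch would be hypothetical:

    ```python
    import numpy as np

    def time_to_tts(band_levels_db, weighting_db, tts_sel_db, strikes_per_s):
        """Seconds until the cumulative, frequency-weighted SEL of repeated
        pile-driving strikes reaches a TTS criterion, assuming equal energy
        per strike. band_levels_db: per-band single-strike SEL; weighting_db:
        the weighting function at the same bands (dB, typically <= 0)."""
        weighted = band_levels_db + weighting_db
        sel_single = 10 * np.log10(np.sum(10 ** (weighted / 10)))  # one strike, band sum
        n_strikes = 10 ** ((tts_sel_db - sel_single) / 10)         # equal-energy summation
        return n_strikes / strikes_per_s
    ```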

  8. [A new method of investigation of "child's" behavior (infant-mother attachment) of newborn rats].

    PubMed

    Stovolosov, I S; Dubynin, V A; Kamenskiĭ, A A

    2010-01-01

    A new method of studying the "child's" (maternal bonding) behavior of newborn rats was developed. The efficiency of the method was demonstrated in an assessment of the dopaminergic control of infant-mother attachment. The selective D2 antagonist clebopride, applied at doses subthreshold for motor activity, decreased the pups' drive to stay in contact with the dam. On the basis of the features analyzed (latent periods and the expression of various behavioral components), an integrated criterion for the assessment of "child's" reactions was proposed. Application of this criterion made it possible to neutralize the high individual variability of behavior typical of newborns.

  9. A computerized scheme of SARS detection in early stage based on chest image of digital radiograph

    NASA Astrophysics Data System (ADS)

    Zheng, Zhong; Lan, Rihui; Lv, Guozheng

    2004-05-01

    A computerized scheme for early severe acute respiratory syndrome (SARS) lesion detection in digital chest radiographs is presented in this paper. The total scheme consists of two main parts: the first part determines suspect lesions using the theory of locally orderless images (LOI) and their spatial features; the second part selects the real lesions among these suspects by their frequency features. The method used in the second part was first developed by Katsuragawa et al. and is applied here with necessary modifications. Preliminary results indicate that these features are good criteria for telling early SARS lesions apart from normal lung structures.

  10. Evaluation of optimal control type models for the human gunner in an Anti-Aircraft Artillery (AAA) system

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Kessler, K. M.

    1975-01-01

    The selection of the structure of optimal control type models for the human gunner in an anti-aircraft artillery system is considered. Several structures within the LQG framework may be formulated. Two basic types are considered: (1) kth-derivative controllers; and (2) proportional-integral-derivative (P-I-D) controllers. It is shown that a suitable criterion for model structure determination can be based on the ensemble statistics of the tracking error. In the case when the ensemble steady-state tracking error is zero, it is suggested that a P-I-D controller formulation be used in preference to the kth-derivative controller.
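
    As a reference point for the two controller families compared, a minimal discrete P-I-D update is sketched below (a kth-derivative controller would omit the integral term); the gains are illustrative only:

    ```python
    def pid_step(error, state, kp, ki, kd, dt):
        """One update of a discrete P-I-D law: u = kp*e + ki*int(e) + kd*de/dt.
        `state` carries (integral, previous_error) between calls."""
        integral, prev_error = state
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        return u, (integral, error)
    ```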

  11. Developing disease resistance in CP-Cultivars

    USDA-ARS?s Scientific Manuscript database

    Disease resistance is an important selection criterion in the Canal Point (CP) Sugarcane Cultivar Development Program. Ratoon stunt (RSD, caused by Leifsonia xyli subsp. xyli Evtushenko et al.), leaf scald (caused by Xanthomonas albilineans Ashby, Dowson), mosaic (caused by Sugarcane mosaic virus st...

  12. The development and validity of the Salford Gait Tool: an observation-based clinical gait assessment tool.

    PubMed

    Toro, Brigitte; Nester, Christopher J; Farren, Pauline C

    2007-03-01

    To develop the construct, content, and criterion validity of the Salford Gait Tool (SF-GT) and to evaluate agreement between gait observations using the SF-GT and kinematic gait data. Tool development and comparative evaluation. University in the United Kingdom. For developing construct and content validity, convenience samples of 10 children with hemiplegic, diplegic, and quadriplegic cerebral palsy (CP), 152 physical therapy students, and 4 physical therapists were recruited. For developing criterion validity, kinematic gait data of 13 gait clusters containing 56 children with hemiplegic, diplegic, and quadriplegic CP and 11 neurologically intact children were used. For clinical evaluation, a convenience sample of 23 pediatric physical therapists participated. We developed a sagittal-plane observational gait assessment tool through a series of design, test, and redesign iterations. The tool's grading system was calibrated using the kinematic gait data of the 13 gait clusters and was evaluated by comparing the agreement of gait observations made with the SF-GT against kinematic gait data. Criterion standard: kinematic gait data. There was 58% mean agreement based on grading categories and 80% mean agreement based on degree estimations evaluated with the least significant difference method. The new SF-GT has good concurrent criterion validity.

  13. Discriminating Talent Identified Junior Australian Footballers Using a Fundamental Gross Athletic Movement Assessment.

    PubMed

    Woods, Carl T; Banyard, Harry G; McKeown, Ian; Fransen, Job; Robertson, Sam

    2016-09-01

    Talent identification (TID) is a pertinent component of the sports sciences, affording practitioners the opportunity to target developmental interventions to a select few, optimising financial investments. However, TID is multi-componential, requiring the recognition of both immediate and prospective performance. The measurement of athletic movement skill may afford practitioners insight into the latter component, given its augmented relationship with functional sport-specific qualities. It is currently unknown whether athletic movement skill is a discriminant quality in junior Australian football (AF). This study aimed to discriminate talent identified junior AF players from their non-talent identified counterparts using a fundamental gross athletic movement assessment. From a total of 50 under-18 (U18) AF players, two groups were classified a priori based on selection level: talent identified (n = 25; state academy representatives) and non-talent identified (n = 25; state-based competition representatives). Players performed a fundamental gross athletic movement assessment based on the Athletic Ability Assessment (AAA), consisting of an overhead squat, double lunge (left and right legs), single leg Romanian deadlift (left and right legs), and a push up (six movement criteria). Movements were scored across three assessment points using a three-point scale (resulting in a possible score of nine for each movement). A multivariate analysis of variance revealed significant between-group effects on four of the six movement criteria (d = 0.56 - 0.87; p = 0.01 - 0.02). Binary logistic regression models and a receiver operating characteristic curve inspection revealed that the overhead squat score provided the greatest group discrimination (β(SE) = -0.89(0.44); p < 0.05), with a score of 4.5 classifying 64% and 88% of the talent identified and non-talent identified groups, respectively. Results support the integration of this assessment into contemporary talent identification approaches in junior AF, as it may provide coaches with insight into a junior's developmental potential.
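
    A sketch of the style of analysis reported (binary logistic regression plus a ROC-derived cutoff) follows; the use of the Youden index to pick the threshold is an assumption for illustration, not necessarily the authors' procedure:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_curve

    def discriminating_cutoff(scores, y):
        """scores: per-player movement scores (e.g., overhead squat);
        y: 1 = talent identified, 0 = non-talent identified."""
        X = scores.reshape(-1, 1)
        model = LogisticRegression().fit(X, y)
        prob = model.predict_proba(X)[:, 1]
        fpr, tpr, thresholds = roc_curve(y, prob)
        j = np.argmax(tpr - fpr)  # Youden index: best sensitivity/specificity trade-off
        return model, thresholds[j]
    ```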

  14. Fatigue behavior of thin-walled grade 2 titanium samples processed by selective laser melting. Application to life prediction of porous titanium implants.

    PubMed

    Lipinski, P; Barbas, A; Bonnet, A-S

    2013-12-01

    Because of its biocompatibility and high mechanical properties, commercially pure grade 2 titanium (CPG2Ti) is widely used for the fabrication of patient-specific implants or hard-tissue substitutes with complex shapes. To avoid stress shielding and help their colonization by bone, prostheses with a controlled porosity are designed. Selective laser melting (SLM) is well suited to manufacturing such geometrically complicated structures, constituted of struts with rough surfaces and relatively small diameters. Few studies have been dedicated to characterizing the fatigue properties of SLM-processed samples and bulk parts, and those followed conventional or standard protocols; the fatigue behavior of standard samples is very different from that of raw porous structures. In this study, SLM-made "as built" (AB) and "heat treated" (HT) tubular samples were tested in fatigue. Wöhler curves were determined in both cases. The obtained endurance limits were σD(AB)=74.5MPa and σD(HT)=65.7MPa, respectively. The heat treatment worsened the endurance limit by relaxing the compressive residual stresses measured on the external surface of the samples. A modified Goodman diagram was established for the raw specimens. Porous samples, based on the pattern developed by Barbas et al. (2012), were manufactured by SLM. Fatigue tests and finite element simulations performed on these samples enabled the determination of a simple rule of fatigue assessment. The method based on the stress gradient appeared to be the best approach for taking into account the notch influence on the fatigue life of CPG2Ti structures with a controlled porosity. A direction-dependent apparent fatigue strength was found. A criterion based on the effective, or global, nominal stress was proposed, taking into account the anisotropy of the porous structures. Thanks to this criterion, the usual calculation methods can be used to design bone substitutes without precise modelling of their internal fine porosity. © 2013 Elsevier Ltd. All rights reserved.

  15. A Joint Optimization Criterion for Blind DS-CDMA Detection

    NASA Astrophysics Data System (ADS)

    Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.

    2006-12-01

    This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.

  16. Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria

    NASA Technical Reports Server (NTRS)

    Chow, W-T.; Wang, L.; Atluri, S. N.

    1998-01-01

    This report documents the recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion is a better fracture criterion than the total energy release rate criterion in the characterization of the bimaterial interfacial cracks. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.

  17. Revision of the criterion to avoid electron heating during laser aided plasma diagnostics (LAPD)

    NASA Astrophysics Data System (ADS)

    Carbone, E. A. D.; Palomares, J. M.; Hübner, S.; Iordanova, E.; van der Mullen, J. J. A. M.

    2012-01-01

    A criterion is given for the laser fluence (in J/m2) such that, when satisfied, disturbance of the plasma by the laser is avoided. This criterion accounts for laser heating of the electron gas mediated by electron-ion (ei) and electron-atom (ea) interactions. The first heating mechanism is well known and was extensively dealt with in the past. The second is often overlooked but is important for plasmas with a low degree of ionization. It is especially important for cold atmospheric plasmas, which nowadays stand in the focus of attention. The new criterion, based on the concerted action of both ei and ea interactions, is validated by Thomson scattering experiments performed on four different plasmas.

  18. Robust Criterion for the Existence of Nonhyperbolic Ergodic Measures

    NASA Astrophysics Data System (ADS)

    Bochi, Jairo; Bonatti, Christian; Díaz, Lorenzo J.

    2016-06-01

    We give explicit C1-open conditions that ensure that a diffeomorphism possesses a nonhyperbolic ergodic measure with positive entropy. Actually, our criterion provides the existence of a partially hyperbolic compact set with one-dimensional center and positive topological entropy on which the center Lyapunov exponent vanishes uniformly. The conditions of the criterion are met on a C1-dense and open subset of the set of diffeomorphisms having a robust cycle. As a corollary, there exists a C1-open and dense subset of the set of non-Anosov robustly transitive diffeomorphisms consisting of systems with nonhyperbolic ergodic measures with positive entropy. The criterion is based on a notion of a blender defined dynamically in terms of strict invariance of a family of discs.

  19. da Vinci skills simulator for assessing learning curve and criterion-based training of robotic basic skills.

    PubMed

    Brinkman, Willem M; Luursema, Jan-Maarten; Kengen, Bas; Schout, Barbara M A; Witjes, J Alfred; Bekkers, Ruud L

    2013-03-01

    To answer 2 research questions: what are the learning curve patterns of novices on the da Vinci skills simulator parameters and what parameters are appropriate for criterion-based robotic training. A total of 17 novices completed 2 simulator sessions within 3 days. Each training session consisted of a warming-up exercise, followed by 5 repetitions of the "ring and rail II" task. Expert participants (n = 3) performed a warming-up exercise and 3 repetitions of the "ring and rail II" task on 1 day. We analyzed all 9 parameters of the simulator. Significant learning occurred on 5 parameters: overall score, time to complete, instrument collision, instruments out of view, and critical errors within 1-10 repetitions (P <.05). Economy of motion and excessive instrument force only showed improvement within the first 5 repetitions. No significant learning on the parameter drops and master workspace range was found. Using the expert overall performance score (n = 3) as a criterion (overall score 90%), 9 of 17 novice participants met the criterion within 10 repetitions. Most parameters showed that basic robotic skills are learned relatively quickly using the da Vinci skills simulator, but that 10 repetitions were not sufficient for most novices to reach an expert level. Some parameters seemed inappropriate for expert-based criterion training because either no learning occurred or the novice performance was equal to expert performance. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Robust online tracking via adaptive samples selection with saliency detection

    NASA Astrophysics Data System (ADS)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To address the degradation of the classifiers by misaligned samples, we introduce a saliency detection method into our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
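
    The spectral residual step is compact enough to sketch. This follows the standard formulation of the method (Hou and Zhang) that the paper builds on, with filter sizes as assumed defaults:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(img):
        """Saliency map from the image spectral residual: subtract a locally
        averaged log-amplitude spectrum from the log-amplitude spectrum,
        keep the phase, invert, and smooth. img is a 2-D grayscale array."""
        F = np.fft.fft2(img)
        log_amp = np.log(np.abs(F) + 1e-12)
        phase = np.angle(F)
        residual = log_amp - uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return gaussian_filter(sal, sigma=2.5)
    ```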

  1. Did My M.D. Really Go to University to Learn? Detrimental Effects of Numerus Clausus on Self-Efficacy, Mastery Goals and Learning

    PubMed Central

    Sommet, Nicolas; Pulfrey, Caroline; Butera, Fabrizio

    2013-01-01

    Exams with numerus clausus are very common in Medicine, Business Administration and Law. They are intended to select a predefined number of academic candidates on the basis of their rank rather than their absolute performance. Various scholars and politicians believe that numerus clausus policies are a vector of academic excellence. We argue, however, that they could have ironic epistemic effects. In comparison with selective policies based on criterion-based evaluations, selection via numerus clausus creates negative interdependence of competence (i.e., the success of some students comes at the expense of the others). Thus, we expect it to impair students’ sense of self-efficacy and—by extension—the level of mastery goals they adopt, as well as their actual learning. Two field studies respectively reported that presence (versus absence) and awareness (versus ignorance) of numerus clausus policies at University was associated with a decreased endorsement of mastery goals; this effect was mediated by a reduction in self-efficacy beliefs. Moreover, an experimental study revealed that numerus clausus negatively predicted learning; this effect was, again, mediated by a reduction in self-efficacy beliefs. Practical implications for the selection procedures in higher education are discussed. PMID:24376794

  2. Distributed polar-coded OFDM based on Plotkin's construction for half duplex wireless communication

    NASA Astrophysics Data System (ADS)

    Umar, Rahim; Yang, Fengfan; Mughal, Shoaib; Xu, HongJun

    2018-07-01

    A Plotkin-based polar-coded orthogonal frequency division multiplexing (P-PC-OFDM) scheme is proposed and its bit error rate (BER) performance over additive white Gaussian noise (AWGN), frequency-selective Rayleigh, Rician and Nakagami-m fading channels has been evaluated. The considered Plotkin construction possesses a parallel split in its structure, which motivated us to extend the proposed P-PC-OFDM scheme to a coded cooperative scenario. As the relay's effective collaboration has always been pivotal in the design of cooperative communication, an efficient selection criterion for choosing the information bits has been incorporated at the relay node. To assess the BER performance of the proposed cooperative scheme, we have also upgraded the conventional polar-coded cooperative scheme in the context of OFDM as an appropriate benchmark. Monte Carlo simulation results revealed that the proposed Plotkin-based polar-coded cooperative OFDM scheme convincingly outperforms the conventional polar-coded cooperative OFDM scheme by 0.5-0.6 dB over the AWGN channel. This prominent gain in BER performance is made possible by the bit-selection criterion and the joint successive cancellation decoding adopted at the relay and the destination nodes, respectively. Furthermore, the proposed coded cooperative schemes outperform their corresponding non-cooperative schemes by a gain of 1 dB under identical conditions.
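
    The parallel split referred to above is Plotkin's |u|u+v| construction. A toy recursive encoder built on it is sketched below; this is the generic construction (equivalent to the polar transform up to a known reordering), not the authors' full coded-cooperation scheme:

    ```python
    import numpy as np

    def plotkin(u, v):
        """Plotkin's |u|u+v| construction: two length-n binary words
        become one length-2n word (addition mod 2)."""
        return np.concatenate([u, (u + v) % 2])

    def transform(bits):
        """Recursive encoder built from the Plotkin split (len(bits) = 2^k);
        freezing low-reliability input positions would make this a polar code."""
        if len(bits) == 1:
            return bits
        half = len(bits) // 2
        return plotkin(transform(bits[:half]), transform(bits[half:]))
    ```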

  3. Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation

    PubMed Central

    Maji, Pradipta; Roy, Shaswati

    2015-01-01

    Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and the multiresolution image analysis technique. The proposed method assumes that the major brain tissues in the MR images, namely gray matter, white matter, and cerebrospinal fluid, have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method, based on the maximum relevance-maximum significance criterion, is introduced to select relevant and significant textural features for the segmentation problem, while a mathematical-morphology-based skull stripping preprocessing step is proposed to remove non-cerebral tissues like the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961

  4. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

    The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products, to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is given using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.

  5. A method for tailoring the information content of a software process model

    NASA Technical Reports Server (NTRS)

    Perkins, Sharon; Arend, Mark B.

    1990-01-01

    The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products, to support a new software development process. Procedures for characterizing problem domains in general and mapping to a tailored set of life cycle processes and products are presented. An overview of the method is given using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.

  6. Selection of important ecological source patches based on Green Infrastructure theory: A case study of Wuhan city

    NASA Astrophysics Data System (ADS)

    Ke, Yuanyuan; Yu, Yan; Tong, Yan

    2018-01-01

    Selecting urban ecological patches is of great significance for constructing urban green infrastructure networks, protecting urban biodiversity, and preserving the ecological environment. With the support of GIS technology, a criterion for selecting source patches was developed according to existing planning, and ecological source patches for terrestrial organisms and for aquatic and amphibious organisms were then selected in Wuhan city. To increase the connectivity of the ecological patches and achieve greater ecological protection benefits, green infrastructure networks in Wuhan city were constructed with the minimum path analysis method. Finally, the characteristics of the ecological source patches were analyzed with landscape metrics, and the ecological protection importance of each source patch was evaluated comprehensively. The results showed that there were 23 important ecological source patches in Wuhan city, among which the Sushan Temple Forest Patch, Lu Lake, and Shangshe Lake Wetland Patch were the most important of all kinds of patches for ecological protection. This study can provide a scientific basis for the preservation of urban ecological space, the delineation of natural conservation areas, and the protection of biological diversity.

  7. Fast and Accurate Construction of Ultra-Dense Consensus Genetic Maps Using Evolution Strategy Optimization

    PubMed Central

    Mester, David; Ronin, Yefim; Schnable, Patrick; Aluru, Srinivas; Korol, Abraham

    2015-01-01

    Our aim was to develop a fast and accurate algorithm for constructing consensus genetic maps for chip-based SNP genotyping data with a high proportion of shared markers between mapping populations. Chip-based genotyping of SNP markers allows producing high-density genetic maps with a relatively standardized set of marker loci for different mapping populations. The availability of a standard high-throughput mapping platform simplifies consensus analysis: unique markers can be ignored at the stage of consensus mapping, reducing the mathematical complexity of the problem and in turn allowing bigger mapping data sets to be analyzed using global optimization criteria instead of local ones. Our three-phase analytical scheme includes automatic selection of ~100-300 of the most informative (resolvable by recombination) markers per linkage group, building a stable skeletal marker order for each data set and verifying it using jackknife re-sampling, and consensus mapping analysis based on a global optimization criterion. A novel Evolution Strategy optimization algorithm with a global optimization criterion presented in this paper is able to generate high-quality, ultra-dense consensus maps with many thousands of markers per genome. This algorithm utilizes "potentially good orders" in the initial solution and in the new mutation procedures that generate trial solutions, enabling a consensus order to be obtained in reasonable time. The developed algorithm, tested on a wide range of simulated data and real-world data (Arabidopsis), outperformed two state-of-the-art algorithms in mapping accuracy and computation time. PMID:25867943
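
    A compact stand-in for the optimization core is sketched below: a (1+lambda) evolution strategy over marker orders with segment-reversal mutation, minimizing the summed adjacent distances. The paper's algorithm additionally seeds "potentially good orders" into the initial solution and the mutation procedures; everything here is an illustrative simplification:

    ```python
    import numpy as np

    def es_order(dist, n_iter=5000, lam=20, seed=0):
        """(1+lambda) evolution strategy for marker ordering: `dist` is a
        symmetric matrix of pairwise recombination distances; the cost of an
        order is the sum of distances between adjacent markers."""
        rng = np.random.default_rng(seed)
        n = dist.shape[0]
        cost = lambda order: dist[order[:-1], order[1:]].sum()
        best = rng.permutation(n)
        best_c = cost(best)
        for _ in range(n_iter):
            for _ in range(lam):                       # lambda offspring per generation
                i, j = sorted(rng.integers(0, n, size=2))
                trial = best.copy()
                trial[i:j + 1] = trial[i:j + 1][::-1]  # segment-reversal mutation
                c = cost(trial)
                if c < best_c:
                    best, best_c = trial, c
        return best, best_c
    ```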

  8. A novel tree-based procedure for deciphering the genomic spectrum of clinical disease entities.

    PubMed

    Mbogning, Cyprien; Perdry, Hervé; Toussile, Wilson; Broët, Philippe

    2014-01-01

    Dissecting the genomic spectrum of clinical disease entities is a challenging task. Recursive partitioning (or classification tree) methods provide powerful tools for exploring complex interplay among genomic factors, with respect to a main factor, that can reveal hidden genomic patterns. To take confounding variables into account, the partially linear tree-based regression (PLTR) model was recently published. It combines regression models and tree-based methodology. It is, however, computationally burdensome and not well suited to situations in which a large number of explanatory variables is expected. We developed a novel procedure that represents an alternative to the original PLTR procedure, and considered different selection criteria. A simulation study with different scenarios was performed to compare the performance of the proposed procedure to the original PLTR strategy. The proposed procedure with the Bayesian Information Criterion (BIC) achieved good performance in detecting the hidden structure as compared to the original procedure. The novel procedure was used for analyzing patterns of copy-number alterations in lung adenocarcinomas, with respect to Kirsten Rat Sarcoma Viral Oncogene Homolog gene (KRAS) mutation status, while controlling for a cohort effect. Results highlight two subgroups of pure or nearly pure wild-type KRAS tumors with particular copy-number alteration patterns. The proposed procedure with a BIC criterion represents a powerful and practical alternative to the original procedure. Our procedure performs well in a general framework and is simple to implement.
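
    The BIC used as the selection criterion is the standard one; for reference (k = number of free parameters, n = number of observations):

    ```python
    import numpy as np

    def bic(log_likelihood, k, n):
        """Bayesian Information Criterion: smaller is better. The log(n)
        penalty on model size is what favours compact tree structures."""
        return -2.0 * log_likelihood + k * np.log(n)

    # Model choice: keep the candidate with the lowest BIC, e.g.
    # best = min(candidates, key=lambda m: bic(m.loglik, m.k, n))
    ```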

  9. A Thomistic defense of whole-brain death

    PubMed Central

    Eberl, Jason T.

    2015-01-01

    Michel Accad critiques the currently accepted whole-brain criterion for determining the death of a human being from a Thomistic metaphysical perspective and, in so doing, raises objections to a particular argument defending the whole-brain criterion by Patrick Lee and Germain Grisez. In this paper, I will respond to Accad's critique of the whole-brain criterion and defend its continued validity as a criterion for determining when a human being's death has occurred in accord with Thomistic metaphysical principles. I will, however, join Accad in criticizing Lee and Grisez's proposed defense of the whole-brain criterion as potentially leading to erroneous conclusions regarding the determination of human death. Lay summary: Catholic physicians and bioethicists currently debate the legally accepted clinical standard for determining when a human being has died—known as the “wholebrain criterion”—which has also been morally affirmed by the Magisterium. This paper responds to physician Michel Accad’s critique of the whole-brain criterion based upon St. Thomas Aquinas’s metaphysical account of human nature as a union of a rational soul and a material body. I defend the whole-brain criterion from the same Thomistic philosophical perspective, while agreeing with Accad’s objection to an alternative Thomistic defense of whole-brain death by philosophers Patrick Lee and Germain Grisez. PMID:26912933

  10. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
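
    The threshold-driven coder selection can be sketched per block: try the cheap coder first and escalate only where the distortion stays above threshold. The block size, coefficient counts, and squared-error measure below are illustrative assumptions, not the paper's exact coder mix:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def code_block(block, keep, thresh):
        """Code one image block with a low-rate DCT coder (keep x keep
        low-frequency coefficients); fall back to a higher-rate coder
        (twice the coefficients per axis) if the MSE exceeds `thresh`."""
        C = dctn(block, norm='ortho')
        mask = np.zeros_like(C, dtype=bool)
        mask[:keep, :keep] = True
        approx = idctn(np.where(mask, C, 0.0), norm='ortho')
        if np.mean((block - approx) ** 2) <= thresh:
            return approx, 'low-rate'
        mask[:2 * keep, :2 * keep] = True
        return idctn(np.where(mask, C, 0.0), norm='ortho'), 'high-rate'
    ```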

  11. Criteria to Evaluate Interpretive Guides for Criterion-Referenced Tests

    ERIC Educational Resources Information Center

    Trapp, William J.

    2007-01-01

    This project provides a list of criteria for which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…

  12. Criterion-Referenced Test Items for Small Engines.

    ERIC Educational Resources Information Center

    Herd, Amon

    This notebook contains criterion-referenced test items for testing students' knowledge of small engines. The test items are based upon competencies found in the Missouri Small Engine Competency Profile. The test item bank is organized in 18 sections that cover the following duties: shop procedures; tools and equipment; fasteners; servicing fuel…

  13. An Application of Practical Strategies in Assessing the Criterion-Related Validity of Credentialing Examinations.

    ERIC Educational Resources Information Center

    Fidler, James R.

    1993-01-01

    Criterion-related validities of 2 laboratory practitioner certification examinations for medical technologists (MTs) and medical laboratory technicians (MLTs) were assessed for 81 MT and 70 MLT examinees. Validity coefficients are presented for both measures. Overall, summative ratings yielded stronger validity coefficients than ratings based on…

  14. 3DFEMWATER/3DLEWASTE: NUMERICAL CODES FOR DELINEATING WELLHEAD PROTECTION AREAS IN AGRICULTURAL REGIONS BASED ON THE ASSIMILATIVE CAPACITY CRITERION

    EPA Science Inventory

    Two related numerical codes, 3DFEMWATER and 3DLEWASTE, are presented and used to delineate wellhead protection areas in agricultural regions using the assimilative capacity criterion. 3DFEMWATER (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media) ...

  15. Standards and Criteria. Paper #10 in Occasional Paper Series.

    ERIC Educational Resources Information Center

    Glass, Gene V.

    The logical and psychological bases for setting cutting scores for criterion-referenced tests are examined; they are found to be intrinsically arbitrary and are often examples of misdirected precision and axiomatization. The term, criterion referenced, originally referred to a technique for making test scores meaningful by controlling the test…

  16. Manitoba Schools Fitness 1989.

    ERIC Educational Resources Information Center

    Manitoba Dept. of Education, Winnipeg.

    This manual outlines physical fitness tests that may be used in the schools. The tests are based on criterion standards which indicate the levels of achievement at which health risk factors may be reduced. Test theory, protocols, and criterion charts are presented for: (1) muscle strength and endurance, (2) body composition, (3) flexibility, and…

  17. An Improved Correction for Range Restricted Correlations Under Extreme, Monotonic Quadratic Nonlinearity and Heteroscedasticity.

    PubMed

    Culpepper, Steven Andrew

    2016-06-01

    Standardized tests are frequently used for selection decisions, and the validation of test scores remains an important area of research. This paper builds upon prior literature about the effect of nonlinearity and heteroscedasticity on the accuracy of standard formulas for correcting correlations in restricted samples. Existing formulas for direct range restriction require three assumptions: (1) the criterion variable is missing at random; (2) a linear relationship between independent and dependent variables; and (3) constant error variance or homoscedasticity. The results in this paper demonstrate that the standard approach for correcting restricted correlations is severely biased in cases of extreme monotone quadratic nonlinearity and heteroscedasticity. This paper offers at least three significant contributions to the existing literature. First, a method from the econometrics literature is adapted to provide more accurate estimates of unrestricted correlations. Second, derivations establish bounds on the degree of bias attributed to quadratic functions under the assumption of a monotonic relationship between test scores and criterion measurements. New results are presented on the bias associated with using the standard range restriction correction formula, and the results show that the standard correction formula yields estimates of unrestricted correlations that deviate by as much as 0.2 for high to moderate selectivity. Third, Monte Carlo simulation results demonstrate that the new procedure for correcting restricted correlations provides more accurate estimates in the presence of quadratic and heteroscedastic test score and criterion relationships.
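
    The standard direct-range-restriction correction that the paper critiques is the Thorndike Case II formula; for reference:

    ```python
    import numpy as np

    def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Thorndike Case II correction for direct range restriction. Valid
        under linearity and homoscedasticity; as the paper shows, it can be
        biased by as much as 0.2 under monotone quadratic nonlinearity
        and heteroscedasticity."""
        u = sd_unrestricted / sd_restricted
        r = r_restricted
        return u * r / np.sqrt(1.0 + r ** 2 * (u ** 2 - 1.0))
    ```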

  18. Exploring DSM-5 criterion A in Acute Stress Disorder symptoms following natural disaster.

    PubMed

    Lavenda, Osnat; Grossman, Ephraim S; Ben-Ezra, Menachem; Hoffman, Yaakov

    2017-10-01

    The present study examines the DSM-5 Acute Stress Disorder (ASD) diagnostic criteria of exposure, in the context of a natural disaster. The study is based on the reports of 1001 Filipinos following the aftermath of super typhoon Haiyan in 2013. Participants reported exposure to injury, psychological distress and ASD symptoms. Findings indicated the association of criterion A with the prevalence of meeting all other ASD diagnostic criteria and high psychological distress. The diagnostic properties of Criterion A are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C1 plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
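
    Of the criteria named, the maximum strain criterion is the simplest to sketch; the allowable names and sign conventions below are illustrative:

    ```python
    def max_strain_failure(eps, allowables):
        """Maximum strain criterion for one ply in material axes: failure is
        flagged when any strain component exceeds its allowable.
        eps = (e1, e2, g12); allowables = (e1t, e1c, e2t, e2c, g12max)."""
        e1, e2, g12 = eps
        e1t, e1c, e2t, e2c, g12max = allowables
        return (e1 > e1t or e1 < -e1c or
                e2 > e2t or e2 < -e2c or
                abs(g12) > g12max)
    ```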

  20. In Silico Syndrome Prediction for Coronary Artery Disease in Traditional Chinese Medicine

    PubMed Central

    Lu, Peng; Chen, Jianxin; Zhao, Huihui; Gao, Yibo; Luo, Liangtao; Zuo, Xiaohan; Shi, Qi; Yang, Yiping; Yi, Jianqiang; Wang, Wei

    2012-01-01

    Coronary artery disease (CAD) is the leading cause of death in the world. The differentiation of syndrome (ZHENG) is the criterion for diagnosis and treatment in traditional Chinese medicine (TCM). Therefore, in silico syndrome prediction can improve the performance of treatment. In this paper, we present a Bayesian network framework to construct a high-confidence syndrome predictor based on the optimum symptom subset, collected by Support Vector Machine (SVM) feature selection. Syndromes of CAD can be divided into asthenia and sthenia syndromes. In accordance with the hierarchical character of syndromes, we first label every case with one of three syndrome types (asthenia, sthenia, or both), to handle patients presenting with several syndromes. On the basis of these three syndrome classes, we apply SVM feature selection to obtain the optimum symptom subset and compare this subset with a Markov blanket feature selection using ROC analysis. Using this subset, six predictors of CAD syndromes are constructed with the Bayesian network technique. We also evaluate naïve Bayes, C4.5, logistic, and radial basis function (RBF) network classifiers for comparison with the Bayesian network. In conclusion, the Bayesian network method based on the optimum symptoms offers a practical way to predict the six syndromes of CAD in TCM. PMID:22567030

  1. Analysis of non locality proofs in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Nisticò, Giuseppe

    2012-02-01

    Two kinds of non-locality theorems in Quantum Mechanics are taken into account: the theorems based on the criterion of reality and the quite different theorem proposed by Stapp. In the present work, analyses of the theorem due to Greenberger, Horne, Shimony and Zeilinger, based on the criterion of reality, and of Stapp's argument are presented. The results of these analyses show that the alleged violations of locality cannot be considered definitive.

  2. Risk-based containment and air monitoring criteria for work with dispersible radioactive materials.

    PubMed

    Veluri, Venkateswara Rao; Justus, Alan L

    2013-04-01

    This paper presents readily understood, technically defensible, risk-based containment and air monitoring criteria, which are developed from fundamental physical principles. The key for the development of each criterion was the use of a calculational de minimis level, in this case chosen to be 100 mrem (or 40 DAC-h). Examples are provided that demonstrate the effective use of each criterion. Comparison to other often used criteria is provided.

  3. Spectral reflectance indices as a selection criterion for yield improvement in wheat

    NASA Astrophysics Data System (ADS)

    Babar, Md. Ali

    2005-11-01

    Scope and methods of study. Yield in wheat (Triticum aestivum L.) is a complex trait influenced by many environmental factors, and yield improvement is a daunting task for wheat breeders. Spectral reflectance indices (SRIs) have been used to study different physiological traits in wheat. SRIs have the potential to differentiate genotypes for grain yield, and SRIs strongly associated with grain yield can be used to achieve effective genetic gain in wheat under different environments. Three experiments (15 adapted genotypes, and 25 and 36 random sister lines derived from two different crosses) under irrigated conditions, and three experiments (each with 30 advanced genotypes) under water-limited conditions, were conducted in three successive years in Northwest Mexico at the CIMMYT (International Maize and Wheat Improvement Center) experimental station. SRIs and different agronomic data were collected for three years, and biomass was harvested for two years. Phenotypic and genetic correlations between SRIs and grain yield and between SRIs and biomass, realized and broad-sense heritability, and direct and correlated selection responses for grain yield and SRIs were calculated. Findings and conclusion. Seven SRIs were calculated, and three near-infrared (NIR) based indices (WI, NWI-1 and NWI-2) showed higher genetic and phenotypic correlations with grain yield, yield components, and biomass than the other SRIs (PRI, RNDVI, GNDVI, and SR) under both irrigated and water-limited environments. Moderate to high realized and broad-sense heritability and selection response were demonstrated by the three NIR-based indices, as was high efficiency of correlated response for yield estimation. The ratio between the correlated response of grain yield based on the three NIR-based indices and the direct selection response for grain yield was very close to one. The NIR-based indices showed very high accuracy in selecting superior genotypes for grain yield under both well-watered and water-limited conditions. These results demonstrate that effective genetic gain in grain yield can be achieved by making selections with the three NIR-based indices.
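
    The three NIR-based indices follow the usual definitions in this literature; the band choices shown are the commonly used ones and should be treated as assumptions rather than quoted from this study:

    ```python
    def nir_indices(r850, r900, r970):
        """Water index and normalized water indices from canopy reflectance:
        WI = R900/R970, NWI-1 = (R970-R900)/(R970+R900),
        NWI-2 = (R970-R850)/(R970+R850)."""
        wi = r900 / r970
        nwi1 = (r970 - r900) / (r970 + r900)
        nwi2 = (r970 - r850) / (r970 + r850)
        return wi, nwi1, nwi2
    ```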

  4. Scale-invariant entropy-based theory for dynamic ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahulikar, Shripad P., E-mail: spm@iitmandi.ac.in, E-mail: spm@aero.iitb.ac.in; Department of Aerospace Engineering, Indian Institute of Technology Bombay, Mumbai 400076; Kumari, Priti

    2014-09-01

    Dynamically Ordered self-organized dissipative structure exists in various forms and at different scales. This investigation first introduces the concept of an isolated embedding system, which embeds an open system, e.g., dissipative structure and its mass and/or energy exchange with its surroundings. Thereafter, scale-invariant theoretical analysis is presented using thermodynamic principles for Order creation, existence, and destruction. The sustainability criterion for Order existence based on its structured mass and/or energy interactions with the surroundings is mathematically defined. This criterion forms the basis for the interrelationship of physical parameters during sustained existence of dynamic Order. It is shown that the sufficient condition for dynamic Order existence is approached if its sustainability criterion is met, i.e., its destruction path is blocked. This scale-invariant approach has the potential to unify the physical understanding of universal dynamic ordering based on entropy considerations.

  5. Determination of mechanical properties of tin-based babbitt under static and cyclic loading

    NASA Astrophysics Data System (ADS)

    Zernin, M. V.

    2018-03-01

    Based on the results of studies of tin-based babbitt under static loading for three types of stress state, the parameters of the criterion for the equivalence of stressed states were refined and a single deformation diagram of the babbitt was obtained. It is shown that the equivalence criterion for static loading should contain the first principal stress and the stress intensity. Under cyclic loading, the first principal stress can be used as the criterion. The stages of development of fatigue cracks are described, and it is logical to use a statistical approach to reveal the boundary of the transition from short cracks to macrocracks, based on a significant difference in the scatter of crack growth rates at these two stages. The results of experimental studies of the cyclic crack resistance of the babbitt are presented, and the parameters of this boundary are obtained.

  6. A NEW INFRARED COLOR CRITERION FOR THE SELECTION OF 0 < z < 7 AGNs: APPLICATION TO DEEP FIELDS AND IMPLICATIONS FOR JWST SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messias, H.; Afonso, J.; Salvato, M.

    2012-08-01

    It is widely accepted that observations at mid-infrared (mid-IR) wavelengths enable the selection of galaxies with nuclear activity, which may not be revealed even in the deepest X-ray surveys. Many mid-IR color-color criteria have been explored to accomplish this goal and tested thoroughly in the literature. Besides missing many low-luminosity active galactic nuclei (AGNs), one of the main conclusions is that, with increasing redshift, the contamination by non-active galaxies becomes significant (especially at z ≳ 2.5). This is problematic for the study of the AGN phenomenon in the early universe, the main goal of many of the current and future deep extragalactic surveys. In this work new near- and mid-IR color diagnostics are explored, aiming for improved efficiency (better completeness and less contamination) in selecting AGNs out to very high redshifts. We restrict our study to the James Webb Space Telescope wavelength range (0.6-27 μm). The criteria are created based on the predictions by state-of-the-art galaxy and AGN templates covering a wide variety of galaxy properties, and tested against control samples with deep multi-wavelength coverage (ranging from the X-rays to radio frequencies). We show that the colors Ks - [4.5], [4.5] - [8.0], and [8.0] - [24] are ideal as AGN/non-AGN diagnostics at, respectively, z ≲ 1, 1 ≲ z ≲ 2.5, and z ≳ 2.5-3. However, when the source redshift is unknown, these colors should be combined. We thus develop an improved IR criterion (using Ks and IRAC bands, KI) as a new alternative at z ≲ 2.5. KI does not show improved completeness (50%-60% overall) in comparison to commonly used Infrared Array Camera (IRAC) based AGN criteria, but is less affected by non-AGN contamination (revealing a >50%-90% level of successful AGN selection). We also propose KIM (using Ks, IRAC, and MIPS 24 μm bands), which aims to select AGN hosts from local distances to as far back as the end of reionization (0 < z ≲ 7) with reduced non-AGN contamination. However, the necessary testing constraints and the small control-sample sizes prevent the confirmation of its improved efficiency at z ≳ 2.5. Overall, KIM shows a ~30%-40% completeness and a >70%-90% level of successful AGN selection. KI and KIM are built to be reliable against a ~10%-20% error in flux, are based on existing filters, and are suitable for immediate use.

  7. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking †

    PubMed Central

    Kiku, Daisuke; Okutomi, Masatoshi

    2017-01-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking. PMID:29194407
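
    The abstract does not spell out the unified validity criterion; a plausible stand-in, sketched below, scores each RI variant by its local residual energy (smoother residuals interpolate better) and selects the better variant per pixel.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def select_ri_variant(residual_a, residual_b, window=5):
        """Per-pixel choice between two residual-interpolation variants.

        Stand-in criterion: local mean squared residual over a small window
        (the paper's actual unified criterion is not given in the abstract).
        Returns 0 where variant A looks more valid, 1 where B does.
        """
        energy_a = uniform_filter(residual_a**2, size=window)
        energy_b = uniform_filter(residual_b**2, size=window)
        return np.where(energy_a <= energy_b, 0, 1)
    ```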

  8. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    PubMed

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  9. An evaluation of height as an early selection criterion for volume and predictor of site index gain in the western gulf

    Treesearch

    E.M. Raley; D.P. Gwaze; T.D. Byram

    2003-01-01

    Data from repeated periodic measures of height, diameter, and volume from eleven loblolly pine progeny tests maintained as part of the Western Gulf Forest Tree Improvement Program (WGFTIP) were analyzed to 1) determine the potential of using early height, diameter, or volume as selection criteria for rotation-age volume, and 2) develop a method of expressing height...

  10. VizieR Online Data Catalog: Variability-selected AGN in Chandra DFS (Trevese+, 2008)

    NASA Astrophysics Data System (ADS)

    Trevese, D.; Boutsia, K.; Vagnetti, F.; Cappellaro, E.; Puccetti, S.

    2008-11-01

    Variability is a property shared by virtually all active galactic nuclei (AGNs), and was adopted as a criterion for their selection using data from multi-epoch surveys. Low-luminosity AGNs (LLAGNs) are contaminated by the light of their host galaxies, and cannot therefore be detected by the usual colour techniques. For this reason, their evolution in cosmic time is poorly known. Consistency with the evolution derived from X-ray detected samples has not yet been clearly established, in part because the low-luminosity population consists of a mixture of different object types. LLAGNs can be detected by the nuclear optical variability of extended objects. Several variability surveys have been, or are being, conducted for the detection of supernovae (SNe). We propose to re-analyse these SN survey data using a variability criterion optimised for AGN detection, to select a new AGN sample and study its properties. We analysed images acquired with the wide-field imager at the 2.2m ESO/MPI telescope, in the framework of the STRESS supernova survey. We selected the AXAF field centred on the Chandra Deep Field South where, besides the deep X-ray survey, various optical data exist, originating in the EIS and COMBO-17 photometric surveys and the spectroscopic database of GOODS. (1 data file).
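
    A minimal version of variability selection, assuming only multi-epoch magnitudes and a photometric error estimate, flags sources whose epoch-to-epoch scatter significantly exceeds the measurement noise; the survey's AGN-optimised statistic is certainly more refined than this sketch.

    ```python
    import numpy as np

    def is_variable(mags, sigma_phot, threshold=3.0):
        """Flag a source as variable when its epoch-to-epoch rms magnitude
        exceeds 'threshold' times the photometric error (generic sketch,
        not the paper's optimised AGN statistic)."""
        return np.std(mags, ddof=1) > threshold * sigma_phot

    # Example: a nucleus varying by ~0.08 mag rms against 0.02 mag errors
    print(is_variable([21.30, 21.42, 21.25, 21.38], sigma_phot=0.02))  # True
    ```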

  11. 15 CFR 8b.13 - Employment criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ....13 Commerce and Foreign Trade Office of the Secretary of Commerce PROHIBITION OF DISCRIMINATION AGAINST THE HANDICAPPED IN FEDERALLY ASSISTED PROGRAMS OPERATED BY THE DEPARTMENT OF COMMERCE Employment... selection criterion that screens out or tends to screen out handicapped individuals or any class of...

  12. 15 CFR 8b.13 - Employment criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....13 Commerce and Foreign Trade Office of the Secretary of Commerce PROHIBITION OF DISCRIMINATION AGAINST THE HANDICAPPED IN FEDERALLY ASSISTED PROGRAMS OPERATED BY THE DEPARTMENT OF COMMERCE Employment... selection criterion that screens out or tends to screen out handicapped individuals or any class of...

  13. 15 CFR 8b.13 - Employment criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ....13 Commerce and Foreign Trade Office of the Secretary of Commerce PROHIBITION OF DISCRIMINATION AGAINST THE HANDICAPPED IN FEDERALLY ASSISTED PROGRAMS OPERATED BY THE DEPARTMENT OF COMMERCE Employment... selection criterion that screens out or tends to screen out handicapped individuals or any class of...

  14. 15 CFR 8b.13 - Employment criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ....13 Commerce and Foreign Trade Office of the Secretary of Commerce PROHIBITION OF DISCRIMINATION AGAINST THE HANDICAPPED IN FEDERALLY ASSISTED PROGRAMS OPERATED BY THE DEPARTMENT OF COMMERCE Employment... selection criterion that screens out or tends to screen out handicapped individuals or any class of...

  15. 15 CFR 8b.13 - Employment criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....13 Commerce and Foreign Trade Office of the Secretary of Commerce PROHIBITION OF DISCRIMINATION AGAINST THE HANDICAPPED IN FEDERALLY ASSISTED PROGRAMS OPERATED BY THE DEPARTMENT OF COMMERCE Employment... selection criterion that screens out or tends to screen out handicapped individuals or any class of...

  16. Development of an updated tensile neck injury criterion.

    PubMed

    Parr, Jeffrey C; Miller, Michael E; Schubert Kabban, Christine M; Pellettiere, Joseph A; Perry, Chris E

    2014-10-01

    Ejection neck safety remains a concern in military aviation with the growing use of helmet-mounted displays (HMDs) worn for entire mission durations. The original USAF tensile neck injury criterion proposed by Carter et al. (4) is updated, and an injury protection limit for tensile loading is presented to evaluate escape system and HMD safety. The existing tensile neck injury criterion was updated through the addition of newer post-mortem human subject (PMHS) tensile loading and injury data and the application of survival analysis to account for censoring in these data. The updated risk function was constructed with a combined human subject (N = 208) and PMHS (N = 22) data set. An updated AIS 3+ tensile neck injury criterion is proposed based upon the human and PMHS data. This limit is significantly more conservative than the criterion proposed by Carter in 2000, yielding a 5% risk of AIS 3+ injury at a force of 1136 N as compared to a corresponding force of 1559 N. The inclusion of recent PMHS data into the original tensile neck injury criterion results in an injury protection limit that is significantly more conservative, as the recent PMHS data are substantially less censored than the PMHS data included in the earlier criterion. The updated tensile risk function developed in this work is consistent with the tensile risk function published by the Federal Aviation Administration as the basis for its neck injury criterion for side-facing aircraft seats.
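
    For illustration, a lognormal risk function can be anchored to the reported 5% risk at 1136 N; the spread parameter below is an assumption, since the abstract reports the anchor point but not the fitted survival parameters.

    ```python
    from math import log
    from statistics import NormalDist

    SIGMA = 0.25  # hypothetical lognormal spread (not from the paper)
    MU = log(1136.0) - SIGMA * NormalDist().inv_cdf(0.05)  # anchor: 5% at 1136 N

    def ais3_tensile_risk(force_n):
        """Lognormal AIS 3+ tensile neck injury risk, anchored to the quoted
        5% point; an illustrative shape only, not the published function."""
        return NormalDist().cdf((log(force_n) - MU) / SIGMA)

    print(round(ais3_tensile_risk(1136.0), 3))  # 0.05 by construction
    print(round(ais3_tensile_risk(1559.0), 3))  # risk at the 2000-era limit
    ```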

  17. Duration ratio discrimination in pigeons: a criterion-setting analysis.

    PubMed

    Fetterman, J Gregor

    2006-02-28

    Pigeons received trials beginning with a sequence of two colors (blue-->yellow) on the center key of a three-key array. The colors lasted different lengths of time. At the end of the sequence pigeons chose between two keys based on a criterial ratio of the temporal sequence. One choice was reinforced if the time ratio was less than the criterion and the alternate choice was reinforced if the time ratio was greater than the criterion. The criterial ratios (first to second duration) were 1:1, 1.5:1, and 3:1. The same set of intervals was used for the different criterion ratios, producing a balanced distribution of time ratios for the 1.5:1 condition, and unbalanced distributions for the 1:1 and 3:1 conditions. That is, for the 1.5:1 condition half of the duration pairs were less than the criterion and half were greater. However, for the 1:1 and 3:1 conditions, more duration pairs were less than (3:1) or greater than (1:1) the criterion. Accuracy was similar across criterion ratios, but response bias was influenced by the asymmetries of time ratios in the 1:1 and 3:1 conditions. When these asymmetries were controlled, the response biases were reduced or eliminated. These results indicate that pigeons are flexible in establishing a criterion for discriminating duration ratios, unlike humans, who are less flexible and are bound to categorical distinctions in the discrimination of duration ratios.
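
    The decision rule itself is simple enough to state in code: respond 'less' when the first-to-second duration ratio falls below the trained criterion. The bias term below is a hypothetical knob for the asymmetry effects described above, not a parameter from the study.

    ```python
    def ratio_choice(t1, t2, criterion=1.5, bias=0.0):
        """Criterion-setting rule for duration-ratio discrimination (sketch):
        choose 'less' if t1/t2 falls below the criterion, shifted by a
        response bias (bias > 0 mimics a pull toward the 'greater' key)."""
        return "less" if t1 / t2 < criterion - bias else "greater"

    print(ratio_choice(2.0, 4.0))   # ratio 0.5 < 1.5 -> 'less'
    print(ratio_choice(6.0, 2.0))   # ratio 3.0 > 1.5 -> 'greater'
    ```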

  18. Linear and curvilinear correlations of brain gray matter volume and density with age using voxel-based morphometry with the Akaike information criterion in 291 healthy children.

    PubMed

    Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta

    2013-08-01

    We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. Copyright © 2012 Wiley Periodicals, Inc.
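
    The model-selection step described (choosing among linear, quadratic, and cubic age trajectories by AIC) can be sketched as an ordinary least-squares fit per candidate degree; this is only the selection logic, not the paper's VBM pipeline.

    ```python
    import numpy as np

    def best_trajectory_by_aic(age, gm_measure, degrees=(1, 2, 3)):
        """Fit polynomial age trajectories and keep the lowest-AIC model
        (least-squares sketch of the AIC selection step)."""
        n = len(age)
        best = None
        for deg in degrees:
            coef = np.polyfit(age, gm_measure, deg)
            rss = float(np.sum((gm_measure - np.polyval(coef, age)) ** 2))
            k = deg + 1  # number of fitted coefficients
            aic = n * np.log(rss / n) + 2 * k
            if best is None or aic < best[0]:
                best = (aic, deg, coef)
        return best  # (AIC, degree, coefficients)
    ```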

  19. Topological materials discovery using electron filling constraints

    NASA Astrophysics Data System (ADS)

    Chen, Ru; Po, Hoi Chun; Neaton, Jeffrey B.; Vishwanath, Ashvin

    2018-01-01

    Nodal semimetals are classes of topological materials that have nodal-point or nodal-line Fermi surfaces, which give them novel transport and topological properties. Despite being highly sought after, there are currently very few experimental realizations, and identifying new materials candidates has mainly relied on exhaustive database searches. Here we show how recent studies on the interplay between electron filling and nonsymmorphic space-group symmetries can guide the search for filling-enforced nodal semimetals. We recast the previously derived constraints on the allowed band-insulator fillings in any space group into a new form, which enables effective screening of materials candidates based solely on their space group, electron count in the formula unit, and multiplicity of the formula unit. This criterion greatly reduces the computation load for discovering topological materials in a database of previously synthesized compounds. As a demonstration, we focus on a few selected nonsymmorphic space groups which are predicted to host filling-enforced Dirac semimetals. Of the more than 30,000 entries listed, our filling criterion alone eliminates 96% of the entries before they are passed on for further analysis. We discover a handful of candidates from this guided search; among them, the monoclinic crystal Ca2Pt2Ga is particularly promising.
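
    The screening logic reduces to arithmetic on the electron count: an entry is passed on for further analysis only if its electron count per primitive cell cannot satisfy the allowed band-insulator fillings of its space group. The minimal fillings below are placeholders, not the values derived in the paper.

    ```python
    # Hypothetical minimal band-insulator fillings for a few nonsymmorphic
    # space groups (placeholders; the paper derives the real constraints).
    MIN_INSULATOR_FILLING = {14: 4, 61: 8, 205: 8}

    def filling_enforced_candidate(space_group, electrons_per_fu, z_fu):
        """Return True if the electron count per primitive cell (electrons
        per formula unit times formula-unit multiplicity) is incompatible
        with a band insulator, flagging the entry for further analysis."""
        filling = electrons_per_fu * z_fu
        nu = MIN_INSULATOR_FILLING.get(space_group)
        return nu is not None and filling % nu != 0
    ```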

  20. Active learning for semi-supervised clustering based on locally linear propagation reconstruction.

    PubMed

    Chang, Chin-Chun; Lin, Po-Yi

    2015-03-01

    The success of semi-supervised clustering relies on the effectiveness of side information. To get effective side information, a new active learner learning pairwise constraints known as must-link and cannot-link constraints is proposed in this paper. Three novel techniques are developed for learning effective pairwise constraints. The first technique is used to identify samples less important to cluster structures. This technique makes use of a kernel version of locally linear embedding for manifold learning. Samples neither important to locally linear propagation reconstructions of other samples nor on flat patches in the learned manifold are regarded as unimportant samples. The second is a novel criterion for query selection. This criterion considers not only the importance of a sample to expanding the space coverage of the learned samples but also the expected number of queries needed to learn the sample. To facilitate semi-supervised clustering, the third technique yields inferred must-links for passing information about flat patches in the learned manifold to semi-supervised clustering algorithms. Experimental results have shown that the learned pairwise constraints can capture the underlying cluster structures and proven the feasibility of the proposed approach. Copyright © 2014 Elsevier Ltd. All rights reserved.
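
    The query-selection criterion described (weighing a sample's contribution to space coverage against the expected number of queries needed to learn it) can be sketched as a simple ratio score; the scoring function here is an assumption, as the abstract only names the two ingredients.

    ```python
    def select_query(candidates):
        """Pick the next sample to query: maximize coverage gain per
        expected pairwise query (illustrative ratio score; the paper's
        exact criterion is not given in the abstract).

        candidates: dict of sample id -> (coverage_gain, expected_queries).
        """
        return max(candidates, key=lambda s: candidates[s][0] / candidates[s][1])

    print(select_query({"a": (0.9, 3.0), "b": (0.5, 1.0), "c": (0.7, 2.0)}))  # 'b'
    ```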
