ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
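For reference, the five criteria compared in such studies are all penalized versions of the maximized log-likelihood; the sketch below uses the standard textbook definitions (illustrative only, not code from the study), with hypothetical values for the log-likelihood, parameter count and sample size.

import math

def information_criteria(log_lik, k, n):
    """Common information criteria for a model with maximized log-likelihood
    log_lik, k estimated parameters and n observations; smaller is better."""
    aic = -2.0 * log_lik + 2.0 * k
    aicc = aic + (2.0 * k * (k + 1)) / (n - k - 1)          # small-sample correction
    bic = -2.0 * log_lik + k * math.log(n)
    caic = -2.0 * log_lik + k * (math.log(n) + 1.0)         # consistent AIC
    hqic = -2.0 * log_lik + 2.0 * k * math.log(math.log(n))
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "CAIC": caic, "HQIC": hqic}

# Hypothetical fit: the candidate model with the smallest criterion value is preferred.
print(information_criteria(log_lik=-512.3, k=6, n=200))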
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model
ERIC Educational Resources Information Center
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo
2017-01-01
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC], Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in the context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
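As an illustration of the Watanabe-Akaike (widely applicable) information criterion mentioned above, the following minimal NumPy sketch computes WAIC from a matrix of pointwise log-likelihoods (posterior draws by observations); the random matrix is a placeholder, not Breeding Bird Survey output.

import numpy as np

def waic(pointwise_loglik):
    """WAIC from an (S posterior draws x n observations) log-likelihood matrix."""
    S = pointwise_loglik.shape[0]
    # log pointwise predictive density, computed stably via log-sum-exp
    lppd = np.sum(np.logaddexp.reduce(pointwise_loglik, axis=0) - np.log(S))
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = np.sum(np.var(pointwise_loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic), p_waic

rng = np.random.default_rng(0)
fake_loglik = rng.normal(-1.2, 0.1, size=(4000, 250))   # placeholder posterior draws
print(waic(fake_loglik))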
The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18
ERIC Educational Resources Information Center
Haberman, Shelby J.
2007-01-01
In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) for selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm, in which the use of Bayesian model plausibilities is replaced with information-theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
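The weighting step described above can be written compactly. A hedged sketch of Akaike (or BIC) weights and a model-averaged estimate follows; the criterion values and model-specific predictions are made-up numbers for illustration only.

import numpy as np

def ic_weights(ic_values):
    """Convert AIC or BIC values for competing models into averaging weights."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()                          # differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [1012.4, 1009.8, 1015.1]                     # hypothetical criteria for three structures
weights = ic_weights(aic)
predictions = np.array([12.1, 13.4, 11.8])         # hypothetical model-specific estimates
print(weights, weights @ predictions)              # model-averaged estimate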
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest such as death, recurrence of an illness, equipment failure or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
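As a minimal illustration of criterion-based selection among parametric survival models, the sketch below fits exponential and Weibull distributions to synthetic event times and compares AIC and BIC. It deliberately ignores censoring, which a real survival analysis must handle, so it is a toy example rather than the study's method.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
times = rng.weibull(1.5, size=300) * 10.0        # synthetic, fully observed event times

candidates = {
    "exponential": (stats.expon, stats.expon.fit(times, floc=0)),
    "weibull": (stats.weibull_min, stats.weibull_min.fit(times, floc=0)),
}
n = len(times)
for name, (dist, params) in candidates.items():
    loglik = np.sum(dist.logpdf(times, *params))
    k = len(params) - 1                          # location parameter fixed at 0
    aic = -2 * loglik + 2 * k
    bic = -2 * loglik + k * np.log(n)
    print(f"{name}: AIC={aic:.1f}, BIC={bic:.1f}")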
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Saha, Tulshi D; Compton, Wilson M; Chou, S Patricia; Smith, Sharon; Ruan, W June; Huang, Boji; Pickering, Roger P; Grant, Bridget F
2012-04-01
Prior research has demonstrated the dimensionality of alcohol, nicotine and cannabis use disorders criteria. The purpose of this study was to examine the unidimensionality of DSM-IV cocaine, amphetamine and prescription drug abuse and dependence criteria and to determine the impact of elimination of the legal problems criterion on the information value of the aggregate criteria. Factor analyses and Item Response Theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of the illicit drug use criteria using a large representative sample of the U.S. population. All illicit drug abuse and dependence criteria formed unidimensional latent traits. For amphetamines, cocaine, sedatives, tranquilizers and opioids, IRT models fit better for models without legal problems criterion than models with legal problems criterion and there were no differences in the information value of the IRT models with and without the legal problems criterion, supporting the elimination of that criterion. Consistent with findings for alcohol, nicotine and cannabis, amphetamine, cocaine, sedative, tranquilizer and opioid abuse and dependence criteria reflect underlying unitary dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the American Psychiatric Association's DSM-5 Substance and Related Disorders Workgroup. Published by Elsevier Ireland Ltd.
Comparison of Nurse Staffing Measurements in Staffing-Outcomes Research.
Park, Shin Hye; Blegen, Mary A; Spetz, Joanne; Chapman, Susan A; De Groot, Holly A
2015-01-01
Investigators have used a variety of operational definitions of nursing hours of care in measuring nurse staffing for health services research. However, little is known about which approach is best for nurse staffing measurement. To examine whether various nursing hours measures yield different model estimations when predicting patient outcomes and to determine the best method to measure nurse staffing based on the model estimations. We analyzed data from the University HealthSystem Consortium for 2005. The sample comprised 208 hospital-quarter observations from 54 hospitals, representing information on 971 adult-care units and about 1 million inpatient discharges. We compared regression models using different combinations of staffing measures based on productive/nonproductive and direct-care/indirect-care hours. Akaike Information Criterion and Bayesian Information Criterion were used in the assessment of staffing measure performance. The models that included the staffing measure calculated from productive hours by direct-care providers were best, in general. However, the Akaike Information Criterion and Bayesian Information Criterion differences between models were small, indicating that distinguishing nonproductive and indirect-care hours from productive direct-care hours does not substantially affect the approximation of the relationship between nurse staffing and patient outcomes. This study is the first to explicitly evaluate various measures of nurse staffing. Productive hours by direct-care providers are the strongest measure related to patient outcomes and thus should be preferred in research on nurse staffing and patient outcomes.
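The comparison described above amounts to regressing the same outcome on alternative staffing measures and ranking the fits by AIC and BIC. A generic statsmodels sketch with synthetic data (not the University HealthSystem Consortium data) is shown below.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
productive_direct = rng.normal(8, 1, n)                    # hypothetical staffing measure 1
total_hours = productive_direct + rng.normal(2, 0.5, n)    # hypothetical staffing measure 2
outcome = 5 - 0.4 * productive_direct + rng.normal(0, 1, n)

for label, x in [("productive direct-care hours", productive_direct),
                 ("total nursing hours", total_hours)]:
    res = sm.OLS(outcome, sm.add_constant(x)).fit()
    print(f"{label}: AIC={res.aic:.1f}, BIC={res.bic:.1f}")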
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
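A hedged sketch of the tuning idea follows. Scikit-learn does not implement the MCP penalty, so an L1-penalized logistic regression stands in for it here; the point is only to show tuning-parameter selection by cross-validated AUC on synthetic data.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)

best = None
for C in [0.01, 0.03, 0.1, 0.3, 1.0, 3.0]:       # grid over penalty strength
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    if best is None or auc > best[1]:
        best = (C, auc)
print("selected C:", best[0], "CV-AUC:", round(best[1], 3))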
Information Centralization of Organization Information Structures via Reports of Exceptions.
ERIC Educational Resources Information Center
Moskowitz, Herbert; Murnighan, John Keith
A team theoretic model that establishes a criterion (decision rule) for a financial institution branch to report exceptional loan requests to headquarters for action was compared to such choices made by graduate industrial management students acting as financial vice-presidents. Results showed that the loan size criterion specified by subjects was…
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. Autocorrelation function and partial autocorrelation function of residuals and Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. The SARIMA(1, 1, 1)(0, 1, 1)₁₂ model was the best fitting model among various candidate models; the Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed nonautocorrelations in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA(1, 1, 1)(0, 1, 1)₁₂ model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using the SARIMA model. The SARIMA model applied to historical road traffic deaths data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
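A hedged sketch of fitting the reported SARIMA(1, 1, 1)(0, 1, 1)₁₂ specification with statsmodels and checking residual autocorrelation; the monthly series below is synthetic, not the Chinese road traffic mortality data.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(3)
y = pd.Series(100 + 10 * np.sin(np.arange(144) * 2 * np.pi / 12) + rng.normal(0, 3, 144),
              index=pd.date_range("2000-01", periods=144, freq="MS"))

res = SARIMAX(y, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print("AIC:", res.aic, "BIC:", res.bic)
print(acorr_ljungbox(res.resid, lags=[12]))             # Ljung-Box test on the residuals
forecast = res.get_forecast(steps=12).predicted_mean    # 12-month-ahead forecast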
Entropic criterion for model selection
NASA Astrophysics Data System (ADS)
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models in increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
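The selection rule discussed above reduces to ranking candidates by their relative entropy (Kullback-Leibler divergence) from a reference distribution; a minimal numerical sketch with made-up discrete distributions:

import numpy as np
from scipy.stats import entropy

p = np.array([0.1, 0.2, 0.4, 0.3])              # reference distribution
candidates = {
    "model A": np.array([0.15, 0.25, 0.35, 0.25]),
    "model B": np.array([0.05, 0.10, 0.60, 0.25]),
}
# A smaller D_KL(p || q) means the candidate loses less information about p.
for name, q in candidates.items():
    print(name, entropy(p, q))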
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc) and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models being considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
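For reference, the (global) DIC underlying the local decomposition above can be assembled from MCMC output as the posterior mean deviance plus the effective number of parameters. The sketch below uses a normal likelihood with placeholder posterior draws, not the Rwanda HIV-prevalence models.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
y = rng.normal(1.0, 1.0, size=50)                     # synthetic observations
mu_draws = rng.normal(y.mean(), 0.15, size=2000)      # placeholder posterior draws of the mean

def deviance(mu):
    return -2.0 * np.sum(norm.logpdf(y, loc=mu, scale=1.0))

d_bar = np.mean([deviance(m) for m in mu_draws])      # posterior mean deviance
d_hat = deviance(mu_draws.mean())                     # deviance at the posterior mean
p_d = d_bar - d_hat                                   # effective number of parameters
print("DIC:", d_bar + p_d)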
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
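The key change described here, evaluating the negative log-likelihood with a full covariance matrix of total errors rather than a diagonal measurement-error covariance, can be written generically as follows (a standard Gaussian log-likelihood, not the authors' code; the AR(1)-style correlation below is only an illustration of correlated total errors).

import numpy as np

def neg_log_likelihood(residuals, cov):
    """Gaussian negative log-likelihood with a full error covariance matrix."""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    _, logdet = np.linalg.slogdet(cov)            # stable log-determinant
    quad = r @ np.linalg.solve(cov, r)            # r' C^{-1} r without forming the inverse
    return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

n = 20
r = np.random.default_rng(5).normal(0, 0.5, n)
C_diag = 0.25 * np.eye(n)                         # independent errors
rho = 0.6
C_corr = 0.25 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # correlated errors
print(neg_log_likelihood(r, C_diag), neg_log_likelihood(r, C_corr))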
Kurzeja, Patrick
2016-05-01
Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling by the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from micro to macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates usefulness and conditions for subscale sufficiency of saturation and interfacial areas.
Kerridge, Bradley T.; Saha, Tulshi D.; Smith, Sharon; Chou, Patricia S.; Pickering, Roger P.; Huang, Boji; Ruan, June W.; Pulay, Attila J.
2012-01-01
Background: Prior research has demonstrated the dimensionality of Diagnostic and Statistical Manual of Mental Disorders - Fourth Edition (DSM-IV) alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria. The purpose of this study was to examine the dimensionality of hallucinogen and inhalant/solvent abuse and dependence criteria. In addition, we assessed the impact of elimination of the legal problems abuse criterion on the information value of the aggregate abuse and dependence criteria, another proposed change for DSM-IV currently lacking empirical justification. Methods: Factor analyses and item response theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of hallucinogen and inhalant/solvent abuse and dependence criteria using a large representative sample of the United States (U.S.) general population. Results: Hallucinogen and inhalant/solvent abuse and dependence criteria formed unidimensional latent traits. For both substances, IRT models without the legal problems abuse criterion demonstrated better fit than the corresponding model with the legal problem abuse criterion. Further, there were no differences in the information value of the IRT models with and without the legal problems abuse criterion, supporting the elimination of that criterion. No bias in the new diagnoses was observed by sex, age and race-ethnicity. Conclusion: Consistent with findings for alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria, hallucinogen and inhalant/solvent criteria reflect underlying dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and an advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the DSM-V Substance and Related Disorders Workgroup, that is, combining DSM-IV abuse and dependence criteria and eliminating the legal problems abuse criterion. PMID:21621334
Beymer, Matthew R; Weiss, Robert E; Sugar, Catherine A; Bourque, Linda B; Gee, Gilbert C; Morisky, Donald E; Shu, Suzanne B; Javanbakht, Marjan; Bolan, Robert K
2017-01-01
Preexposure prophylaxis (PrEP) has emerged as a human immunodeficiency virus (HIV) prevention tool for populations at highest risk for HIV infection. Current US Centers for Disease Control and Prevention (CDC) guidelines for identifying PrEP candidates may not be specific enough to identify gay, bisexual, and other men who have sex with men (MSM) at the highest risk for HIV infection. We created an HIV risk score for HIV-negative MSM based on Syndemics Theory to develop a more targeted criterion for assessing PrEP candidacy. Behavioral risk assessment and HIV testing data were analyzed for HIV-negative MSM attending the Los Angeles LGBT Center between January 2009 and June 2014 (n = 9481). Syndemics Theory informed the selection of variables for a multivariable Cox proportional hazards model. Estimated coefficients were summed to create an HIV risk score, and model fit was compared between our model and CDC guidelines using the Akaike Information Criterion and Bayesian Information Criterion. Approximately 51% of MSM were above a cutpoint that we chose as an illustrative risk score to qualify for PrEP, identifying 75% of all seroconverting MSM. Our model demonstrated a better overall fit when compared with the CDC guidelines (Akaike Information Criterion Difference = 68) in addition to identifying a greater proportion of HIV infections. Current CDC PrEP guidelines should be expanded to incorporate substance use, partner-level, and other Syndemic variables that have been shown to contribute to HIV acquisition. Deployment of such personalized algorithms may better hone PrEP criteria and allow providers and their patients to make a more informed decision prior to PrEP use.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to predict future flood magnitudes for a given magnitude and frequency of extreme rainfall events. This study analyses the application of rainfall partial duration series (PDS) in the fast-growing urban city of Madinah, located in the western part of Saudi Arabia. Different statistical distributions were applied (i.e. Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, Log Pearson Type III) and their distribution parameters were estimated using L-moments methods. Also, different model selection criteria are applied, e.g. the Akaike Information Criterion (AIC), the Corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated the advantage of the Generalized Extreme Value distribution as the best-fit statistical distribution for the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
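A simplified sketch of the distribution-selection step follows, using maximum likelihood via scipy rather than the L-moments estimation used in the study; the synthetic sample stands in for the Madinah partial duration series.

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rainfall = stats.genextreme.rvs(c=-0.1, loc=30, scale=10, size=120, random_state=rng)

candidates = {
    "Normal": stats.norm,
    "Log-Normal": stats.lognorm,
    "Gumbel (EV type I)": stats.gumbel_r,
    "GEV": stats.genextreme,
    "Pearson III": stats.pearson3,
}
n = len(rainfall)
for name, dist in candidates.items():
    params = dist.fit(rainfall)
    loglik = np.sum(dist.logpdf(rainfall, *params))
    k = len(params)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * loglik + k * np.log(n)
    print(f"{name}: AIC={aic:.1f}, AICc={aicc:.1f}, BIC={bic:.1f}")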
Fong, Ted C T; Ho, Rainbow T H
2015-01-01
The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used, in practice, for selecting working correlation structure, Rotnitzky-Jewell, Quasi Information Criterion (QIC) and Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using the data from Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
New Stopping Criteria for Segmenting DNA Sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wentian
2001-06-18
We propose a solution to the stopping-criterion problem in segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength, which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
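A hedged sketch of the general idea (not the authors' algorithm): recursively split a sequence wherever a two-segment multinomial model improves the log-likelihood by more than a BIC-style penalty, and stop otherwise.

import math
from collections import Counter

def log_lik(seq):
    """Multinomial log-likelihood of a segment under its own base frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return sum(c * math.log(c / n) for c in counts.values())

def segment(seq, start=0, min_len=50):
    """Return breakpoints accepted by a BIC-style stopping criterion."""
    n = len(seq)
    if n < 2 * min_len:
        return []
    whole = log_lik(seq)
    best_gain, best_i = -math.inf, None
    for i in range(min_len, n - min_len):
        gain = log_lik(seq[:i]) + log_lik(seq[i:]) - whole
        if gain > best_gain:
            best_gain, best_i = gain, i
    # A split adds roughly 3 free base frequencies plus 1 breakpoint position.
    if best_gain <= 0.5 * 4 * math.log(n):
        return []                                 # stop: BIC does not support a split
    return (segment(seq[:best_i], start, min_len)
            + [start + best_i]
            + segment(seq[best_i:], start + best_i, min_len))

dna = "AT" * 300 + "GC" * 300                     # toy sequence with one composition shift
print(segment(dna))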
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
Breakdown parameter for kinetic modeling of multiscale gas flows.
Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao
2014-06-01
Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.
Bayesian analysis of CCDM models
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization in order to reduce parameter degeneracy is also developed.
Comparing hierarchical models via the marginalized deviance information criterion.
Quintero, Adrian; Lesaffre, Emmanuel
2018-07-20
Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Regardless of the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
Andrews, Arthur R.; Bridges, Ana J.; Gomez, Debbie
2014-01-01
Purpose: The aims of the study were to evaluate the orthogonality of acculturation for Latinos. Design: Regression analyses were used to examine acculturation in two Latino samples (N = 77; N = 40). In a third study (N = 673), confirmatory factor analyses compared unidimensional and bidimensional models. Method: Acculturation was assessed with the ARSMA-II (Studies 1 and 2), and language proficiency items from the Children of Immigrants Longitudinal Study (Study 3). Results: In Studies 1 and 2, the bidimensional model accounted for slightly more variance (R² = .11 in Study 1; R² = .21 in Study 2) than the unidimensional model (R² = .10 in Study 1; R² = .19 in Study 2). In Study 3, the bidimensional model evidenced better fit (Akaike information criterion = 167.36) than the unidimensional model (Akaike information criterion = 1204.92). Discussion/Conclusions: Acculturation is multidimensional. Implications for Practice: Care providers should examine acculturation as a bidimensional construct. PMID:23361579
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
For estimating the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and two information criteria, the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, the log-probit model was found to be the best of the three models for estimating the fracture probability, in terms of prediction accuracy both for new data and with respect to the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level with 95% confidence for the cladding tube specimens.
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to achieve to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures has been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of the variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than in calibration. This probably resulted from a temporal shift of the dominant controls caused by channel change under a varying flow regime. With advances in earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate for estimating flow resistance in large sand-bed rivers like the lower Yellow River.
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R²), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion, with the more complex criteria (FIA followed by BIC) performing better than the simpler approaches (R² and AIC). Notably, operational seasonal rainfall prediction over the GHA makes use of simple model selection procedures, e.g. R². Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from Pacific and Indian Ocean sea surfaces showing significant improvement during the OND season. The predictability of OND rainfall based on ENSO is robust on a decadal scale compared with MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase of ENSO (El Niño) anticipates enhanced wet conditions.
Water-sediment controversy in setting environmental standards for selenium
Hamilton, Steven J.; Lemly, A. Dennis
1999-01-01
A substantial amount of laboratory and field research on selenium effects on biota has been accomplished since the national water quality criterion for selenium was published in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 μg/L. This commentary presents information to support a national water quality criterion for selenium of 2 μg/L, based on a wide array of support from federal, state, university, and international sources. Recently, two articles have argued for a sediment-based criterion and presented a model for deriving site-specific criteria. In one example, they calculate a criterion of 31 μg/L for a stream with a low sediment selenium toxicity threshold and low site-specific sediment total organic carbon content, which is substantially higher than the national criterion of 5 μg/L. Their basic premise for proposing a sediment-based method has been critically reviewed, and problems in their approach are discussed.
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory, where the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are automatically performed by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and methods for stepwise regression can be specified in the direction argument with character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing explanatory variables and response variables; the response variable should be in the last column. Varieties of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion.
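The abstract above describes R interfaces (stepAIC() and bestglm()); as a complementary, language-agnostic sketch of the same idea, the snippet below implements greedy forward selection by AIC in plain NumPy. The function names and the least-squares form of AIC are illustrative assumptions, not a re-implementation of either R function.

```python
import numpy as np

def ols_aic(X, y):
    """AIC of an OLS fit (least-squares form), counting the intercept and error variance."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X]) if X.shape[1] else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1] + 1
    return n * np.log(rss / n) + 2 * k

def forward_stepwise(X, y):
    """Greedy forward selection: add the predictor that lowers AIC most; stop when none does."""
    remaining, selected = list(range(X.shape[1])), []
    best_aic = ols_aic(X[:, :0], y)                    # intercept-only (null) model
    improved = True
    while improved and remaining:
        improved = False
        scores = [(ols_aic(X[:, selected + [j]], y), j) for j in remaining]
        aic, j = min(scores)
        if aic < best_aic:
            best_aic, improved = aic, True
            selected.append(j)
            remaining.remove(j)
    return selected, best_aic
```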
Cai, Qianqian; Turner, Brett D; Sheng, Daichao; Sloan, Scott
2018-03-01
The kinetics of fluoride sorption by calcite in the presence of metal ions (Co, Mn, Cd and Ba) have been investigated and modelled using the intra-particle diffusion (IPD), pseudo-second order (PSO), and the Hill 4 and Hill 5 kinetic models. Model comparison using the Akaike Information Criterion (AIC), the Schwarz Bayesian Information Criterion (BIC) and the Bayes Factor allows direct comparison of model results irrespective of the number of model parameters. Information criterion results indicate "very strong" evidence that the Hill 5 model was the best fitting model for all observed data due to its ability to fit sigmoidal data, with confidence contour analysis showing that the model parameters were well constrained by the data. Kinetic results were used to determine the thickness of a calcite permeable reactive barrier required to achieve up to 99.9% fluoride removal at a groundwater flow of 0.1 m day⁻¹. Fluoride removal half-life (t0.5) values were found to increase in the order Ba ≈ stonedust (a 99% pure natural calcite) < Cd < Co < Mn. A barrier width of 0.97 ± 0.02 m was found to be required for the fluoride/calcite (stonedust) only system when using no factor of safety, whilst in the presence of Mn and Co the width increased to 2.76 ± 0.28 and 19.83 ± 0.37 m, respectively. In comparison, the PSO model predicted a required barrier thickness of ∼46.0, 62.6 and 50.3 m, respectively, for the fluoride/calcite, Mn and Co systems under the same conditions.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighted-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm² in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple comparison corrected nonparametric analysis of variance. F-test showed that the TCE model was better than the biexponential model in gray and white matter. Corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
Assessment of selenium effects in lotic ecosystems
Hamilton, Steven J.; Palace, Vince
2001-01-01
The selenium literature has grown substantially in recent years to encompass new information in a variety of areas. Correspondingly, several different approaches to establishing a new water quality criterion for selenium have been proposed since establishment of the national water quality criterion in 1987. Diverging viewpoints and interpretations of the selenium literature have led to opposing perspectives on issues such as establishing a national criterion based on a sediment-based model, using hydrologic units to set criteria for stream reaches, and applying lentic-derived effects to lotic environments. This Commentary presents information on the lotic versus lentic controversy. Recently, an article was published that concluded that no adverse effects were occurring in a cutthroat trout population in a coldwater river with elevated selenium concentrations (C. J. Kennedy, L. E. McDonald, R. Loveridge, and M. M. Strosher, 2000, Arch. Environ. Contam. Toxicol. 39, 46–52). This article has added to the controversy rather than provided further insight into selenium toxicology. Information, or rather missing information, in the article has been critically reviewed and problems in the interpretations are discussed.
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values, including stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²) with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity into the model structure, considerable improvement was observed in statistical accuracy, with the model accounting for 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scotch pine stand. Further, including the stand diversity indices in the AGB prediction model with the stand attributes provided important predictive contributions in estimating the total variance in AGB.
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called the Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly.
Ternès, Nils; Rotolo, Federico; Michiels, Stefan
2016-07-10
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method often has a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to a reduction of the FDR without a large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited rise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted.
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
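For readers who want the flavor of how such criteria turn into model rankings and posterior model probabilities, the sketch below uses the generic textbook forms of AIC, AICc, and BIC and the usual exp(-Δ/2) weighting. Function names are hypothetical, KIC is omitted because it requires the Fisher information matrix, and MMA's own least-squares-based definitions may differ in detail.

```python
import numpy as np

def information_criteria(neg2_loglik, k, n):
    """Generic AIC, AICc, and BIC from -2*log-likelihood, k parameters, n observations.
    (MMA's definitions are built on its weighted least-squares objective; this is
    only the generic form.)"""
    aic = neg2_loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = neg2_loglik + k * np.log(n)
    return aic, aicc, bic

def posterior_model_probabilities(criterion_values):
    """Convert one criterion's values for competing models into approximate posterior
    model probabilities: p_i proportional to exp(-0.5 * (IC_i - IC_min))."""
    ic = np.asarray(criterion_values, float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()
```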
Testing the Distance-Duality Relation in the Rh = ct Universe
NASA Astrophysics Data System (ADS)
Hu, J.; Wang, F. Y.
2018-04-01
In this paper, we test the cosmic distance duality (CDD) relation using the luminosity distances from the joint light-curve analysis (JLA) type Ia supernovae (SNe Ia) sample and the angular diameter distance sample from galaxy clusters. The Rh = ct and ΛCDM models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effects of the Hubble constant, we find that η ≡ DA(1 + z)²/DL = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km/s/Mpc. However, the CDD relation is valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km/s/Mpc. Using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.
Testing the distance-duality relation in the Rh = ct universe
NASA Astrophysics Data System (ADS)
Hu, J.; Wang, F. Y.
2018-07-01
In this paper, we test the cosmic distance-duality (CDD) relation using the luminosity distances from the joint light-curve analysis Type Ia supernovae (SNe Ia) sample and the angular diameter distance sample from galaxy clusters. The Rh = ct and Λ cold dark matter (CDM) models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effects of the Hubble constant, we find that η ≡ DA(1 + z)²/DL = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km s⁻¹ Mpc⁻¹. However, the CDD relation is valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km s⁻¹ Mpc⁻¹. Using the Akaike Information Criterion and the Bayesian Information Criterion, we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
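A minimal baseline for the brute-force enumeration mentioned above might look like the following, assuming the statsmodels ARIMA interface (whose likelihood is evaluated with Kalman filter recursions). The function name, order limits, and error handling are illustrative choices rather than the authors' setup.

```python
import itertools
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # state-space / Kalman-filter likelihood

def brute_force_arma(y, max_p=4, max_q=4, criterion="aic"):
    """Enumerate ARMA(p, q) orders and keep the fit minimizing AIC or BIC.
    A simple baseline to compare against MINLP-style searches."""
    best = (np.inf, None, None)
    for p, q in itertools.product(range(max_p + 1), range(max_q + 1)):
        if p == q == 0:
            continue
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                res = ARIMA(y, order=(p, 0, q)).fit()
            score = res.aic if criterion == "aic" else res.bic
            if score < best[0]:
                best = (score, (p, q), res)
        except Exception:
            continue  # skip orders where estimation fails to converge
    return best
```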
How Many Separable Sources? Model Selection In Independent Components Analysis
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
Decohesion models informed by first-principles calculations: The ab initio tensile test
NASA Astrophysics Data System (ADS)
Enrique, Raúl A.; Van der Ven, Anton
2017-10-01
Extreme deformation and homogeneous fracture can be readily studied via ab initio methods by subjecting crystals to numerical "tensile tests", where the energies of locally stable crystal configurations corresponding to elongated and fractured states are evaluated by means of density functional method calculations. The information obtained can then be used to construct traction curves of cohesive zone models in order to address fracture at the macroscopic scale. In this work, we perform an in-depth analysis of traction curves and how ab initio calculations must be interpreted to rigorously parameterize an atomic scale cohesive zone model, using crystalline Ag as an example. Our analysis of traction curves reveals the existence of two qualitatively distinct decohesion criteria: (i) an energy criterion whereby the released elastic energy equals the energy cost of creating two new surfaces and (ii) an instability criterion that occurs at a higher and size independent stress than that of the energy criterion. We find that increasing the size of the simulation cell renders parts of the traction curve inaccessible to ab initio calculations involving the uniform decohesion of the crystal. We also find that the separation distance below which a crack heals is not a material parameter as has been proposed in the past. Finally, we show that a large energy barrier separates the uniformly stressed crystal from the decohered crystal, resolving a paradox predicted by a scaling law based on the energy criterion that implies that large crystals will decohere under vanishingly small stresses. This work clarifies confusion in the literature as to how a cohesive zone model is to be parameterized with ab initio "tensile tests" in the presence of internal relaxations.
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
Development of a percutaneous penetration predictive model by SR-FTIR.
Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A
2013-01-30
This work focused on developing a new evaluation criterion for percutaneous penetration, complementary to log Pow and MW, based on high-spatial-resolution Fourier transform infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run, and after 22 h the distribution of molecules in skin was determined either by HPLC or by SR-FTIR. HPLC data served as the reference. HPLC and SR-FTIR results were compared, and a new predictive criterion based on the SR-FTIR results, named the S index, was determined using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to the OECD predictive cartography. This new criterion S index and the cartography based on SR-FTIR/HPLC results provide relevant information for risk analysis regarding prediction of percutaneous penetration and could be used to build a new mathematical model.
On the predictive information criteria for model determination in seismic hazard analysis
NASA Astrophysics Data System (ADS)
Varini, Elisa; Rotondi, Renata
2016-04-01
Many statistical tools have been developed for evaluating, understanding, and comparing models, from both frequentist and Bayesian perspectives. In particular, the problem of model selection can be addressed according to whether the primary goal is explanation or, alternatively, prediction. In the former case, the criteria for model selection are defined over the parameter space whose physical interpretation can be difficult; in the latter case, they are defined over the space of the observations, which has a more direct physical meaning. In the frequentist approaches, model selection is generally based on an asymptotic approximation which may be poor for small data sets (e.g. the F-test, the Kolmogorov-Smirnov test, etc.); moreover, these methods often apply under specific assumptions on models (e.g. models have to be nested in the likelihood ratio test). In the Bayesian context, among the criteria for explanation, the ratio of the observed marginal densities for two competing models, named Bayes Factor (BF), is commonly used for both model choice and model averaging (Kass and Raftery, J. Am. Stat. Ass., 1995). But BF does not apply to improper priors and, even when the prior is proper, it is not robust to the specification of the prior. These limitations can be extended to two famous penalized likelihood methods as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), since they are proved to be approximations of -2log BF . In the perspective that a model is as good as its predictions, the predictive information criteria aim at evaluating the predictive accuracy of Bayesian models or, in other words, at estimating expected out-of-sample prediction error using a bias-correction adjustment of within-sample error (Gelman et al., Stat. Comput., 2014). In particular, the Watanabe criterion is fully Bayesian because it averages the predictive distribution over the posterior distribution of parameters rather than conditioning on a point estimate, but it is hardly applicable to data which are not independent given parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by Ando and Tsay criterion where the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above mentioned criteria are global summary measures of model performance, but more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets by four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). Then we illustrate the results on their performance evaluated by Bayes Factor, predictive information criteria and retrospective predictive analysis.
Final Environmental Assessment of Military Service Station Privatization at Five AETC Installations
2013-10-01
distinction (Criterion C); or • Have yielded, or may likely yield, information important in prehistory or history (Criterion D). Resources less than 50...important information in history or prehistory; thus, it does not meet the requirement of Criterion D. Building 2109 is recommended not eligible for
Observational constraints on Hubble parameter in viscous generalized Chaplygin gas
NASA Astrophysics Data System (ADS)
Thakur, P.
2018-04-01
A cosmological model with viscous generalized Chaplygin gas (VGCG) is considered here to determine observational constraints on its equation of state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the baryonic acoustic oscillation peak parameter, the CMB shift parameter, and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0=70.24^{+0.34}_{-0.36} and zt=0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
Growth curves for ostriches (Struthio camelus) in a Brazilian population.
Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P
2013-01-01
The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels and for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
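A lightweight way to reproduce this kind of comparison is to least-squares-fit each curve and score it with the Gaussian-error AIC, as sketched below. The Gompertz and logistic parameterizations shown are common textbook forms and the function names are hypothetical; the paper's exact parameterizations may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# One common parameterization of each growth curve (assumed for illustration).
def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

def aic_from_fit(t, w, model, p0):
    """Least-squares fit of a growth curve and its Gaussian-error AIC."""
    popt, _ = curve_fit(model, t, w, p0=p0, maxfev=10000)
    rss = np.sum((w - model(t, *popt)) ** 2)
    n, k = len(w), len(popt) + 1          # +1 for the error variance
    return n * np.log(rss / n) + 2 * k, popt
```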
Model selection for the North American Breeding Bird Survey: A comparison of methods
Link, William; Sauer, John; Niven, Daniel
2017-01-01
The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
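Since WAIC is evaluated here as a computationally cheap approximation to the BPIC, a compact reminder of how it is computed from MCMC output may help: given an S x N matrix of pointwise log-likelihoods (S posterior draws, N observations), WAIC combines the log pointwise predictive density with a variance-based effective-parameter penalty. The sketch below uses the standard Gelman et al. formulation on the deviance scale; the function name is hypothetical.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):
    """WAIC from an (S draws x N observations) matrix of pointwise log-likelihoods.

    lppd_i = log( mean_s p(y_i | theta_s) )
    p_waic = sum_i var_s( log p(y_i | theta_s) )
    WAIC   = -2 * (sum_i lppd_i - p_waic)   (deviance scale)
    """
    S = log_lik.shape[0]
    lppd = logsumexp(log_lik, axis=0) - np.log(S)    # per-observation lppd
    p_waic = np.var(log_lik, axis=0, ddof=1)         # per-observation penalty
    return -2.0 * (lppd.sum() - p_waic.sum()), p_waic.sum()
```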
Why noise is useful in functional and neural mechanisms of interval timing?
2013-01-01
Background: The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results: We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillators block as simplified phase (cosine) oscillators has the additional advantage of being analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions: Noise is ubiquitous, in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing.
Analysis of the observed and intrinsic durations of Swift/BAT gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2016-07-01
The duration distribution of 947 GRBs observed by Swift/BAT is examined, together with that of a subsample of 347 events with measured redshift, which allows the durations to be examined in both the observer and rest frames. Using a maximum log-likelihood method, mixtures of two and three standard Gaussians are fitted to each sample, and the adequate model is chosen based on the difference in the log-likelihoods, the Akaike information criterion, and the Bayesian information criterion. It is found that a two-Gaussian mixture describes the data better than a three-Gaussian mixture, and that the presumed intermediate-duration class is unlikely to be present in the Swift duration data.
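The fitting-and-comparison step described above can be reproduced in a few lines with an off-the-shelf Gaussian mixture implementation, as in the sketch below (scikit-learn's GaussianMixture exposes aic() and bic() directly). The function name and settings such as n_init are illustrative, and the original analysis used its own maximum log-likelihood fits rather than this library.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def compare_gaussian_mixtures(log_t90, k_values=(2, 3)):
    """Fit k-component Gaussian mixtures to log-durations and report log-likelihood,
    AIC, and BIC for each k (smaller AIC/BIC indicates the preferred model)."""
    X = np.asarray(log_t90, float).reshape(-1, 1)
    results = {}
    for k in k_values:
        gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
        results[k] = {
            "loglik": gm.score(X) * X.shape[0],   # score() returns the mean log-likelihood
            "aic": gm.aic(X),
            "bic": gm.bic(X),
        }
    return results
```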
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude larger number of cells than microscopy-based approaches. However, rotational positioning of cells can occur, leading to discordance in spot counts. As a solution to the counting error caused by overlapping spots, a Gaussian mixture model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a random forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
Model weights and the foundations of multimodel inference
Link, W.A.; Barker, R.J.
2006-01-01
Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures are only approximations to the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
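The key point of the abstract, that reading AIC weights as posterior model probabilities amounts to a strong implicit prior on complex models, can be made concrete with a small calculation. Under the standard BIC approximation to the Bayes factor, the implicit prior weight of model i is proportional to exp(0.5 k_i log n - k_i); the sketch below (hypothetical function name) evaluates this for a toy model set.

```python
import numpy as np

def implicit_aic_prior(k, n):
    """Prior model weights implicitly assumed when AIC weights are read as posterior
    model probabilities (relative to BIC weighting with equal priors):
    prior_i proportional to exp(0.5 * k_i * log(n) - k_i).  Larger k gets far more
    prior weight, illustrating how AIC weighting favors complex models a priori."""
    k = np.asarray(k, float)
    logw = 0.5 * k * np.log(n) - k
    w = np.exp(logw - logw.max())     # stabilize before normalizing
    return w / w.sum()

# Example: four nested models with 2, 4, 8, and 16 parameters, n = 200 observations;
# nearly all of the implicit prior mass lands on the 16-parameter model.
print(implicit_aic_prior([2, 4, 8, 16], n=200).round(4))
```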
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations.
Bürger, W; Streibelt, M
2015-02-01
Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis in terms of a successful return to work benefit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40406 persons (up to 59 years old, excluding rehabilitation after hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in a SOR measure. Initially, a single criterion was defined which describes the need for SOR measures. This criterion is based on 3 different items: patients with at least 12 weeks of sickness absence and (a) a SIBAR score > 7 and/or (b) a perceived need for SOR. The main aspect of our analyses was to describe the association between the SOR need criterion and participation in SOR measures, as well as the predictors of SOR participation among patients fulfilling the SOR need criterion. The analyses were based on a multiple logistic regression model. For 16408 patients full data were available. The formal prerequisites for SOR were given for 33% of the sample, of which 32% received SOR after rehabilitation and 43% fulfilled the SOR need criterion. A negative relationship between these 2 categories was observed (phi=-0.08, p<0.01). For patients who fulfilled the need criterion, the probability of participating in SOR decreased by 22% (RR=0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR=0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need criterion: the probability even decreased when the criterion was fulfilled. Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used here can serve as a basis for such a guideline.
Hao, Chen; LiJun, Chen; Albright, Thomas P.
2007-01-01
Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early warning efforts. We are investigating the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, for most species, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using information theory and frequency statistics to produce a relative suitability map. In this paper we generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was selected based on Akaike's Information Criterion (AIC) from a suite of ecologically reasonable predictor variables. Based on the results, we provide a new frequency-statistical method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in the native range to "project" a potential distribution onto the exotic range to build habitat-suitability maps.
Prajapati, Kalp Bhusan; Singh, Rajesh
2018-05-10
In the present study, batch tests were performed to investigate the enhancement in methane production under bio-electrolysis anaerobic co-digestion of sewage sludge and food waste. The bio-electrolysis reactor system (B-EL) yielded more methane (148.5 ml/g COD) than the reactor system without bio-electrolysis (B-CONT; 125.1 ml/g COD). In contrast, the iron-scrap-amended bio-electrolysis reactor system (C-EL) yielded less methane (51.2 ml/g COD) than the control bio-electrolysis reactor system without iron scraps (C-CONT; 114.4 ml/g COD). The modelling study revealed that the Richards and exponential models were best fitted to the cumulative methane production and the biogas production rates, respectively. The best model fits for the different reactors were compared using Akaike's Information Criterion (AIC) and the Bayesian Information Criterion (BIC). The bio-electrolysis process appears to be an emerging technology, with less loss in cellulase specific activity as temperature increases from 50 to 80 °C.
Modeling Dark Energy Through an Ising Fluid with Network Interactions
NASA Astrophysics Data System (ADS)
Luongo, Orlando; Tommasini, Damiano
2014-12-01
We show that dark energy (DE) effects can be modeled by using an Ising perfect fluid with network interactions, whose low-redshift equation of state (EoS) ω0 becomes ω0 = -1, as in the ΛCDM model. In our picture, DE is characterized by a barotropic fluid on a lattice in the equilibrium configuration. Thus, mimicking the spin interaction by replacing the spin variable with an occupational number, the pressure naturally becomes negative. We find that the corresponding EoS mimics the effects of a variable DE term, whose limiting case reduces to the cosmological constant Λ. This permits us to avoid introducing a vacuum energy as the DE source by hand, alleviating the coincidence and fine-tuning problems. We find fairly good cosmological constraints by performing three tests with supernovae Ia (SNeIa), baryonic acoustic oscillation (BAO), and cosmic microwave background (CMB) measurements. Finally, we apply the Akaike information criterion (AIC) and Bayesian information criterion (BIC) selection criteria, showing that our model is statistically favored with respect to the Chevallier-Polarski-Linder (CPL) parametrization.
On a Model of a Nonlinear Feedback System for River Flow Prediction
NASA Astrophysics Data System (ADS)
Ozaki, T.
1980-02-01
A nonlinear system with feedback is proposed as a dynamic model for the hydrological system, whose input is the rainfall and whose output is the discharge of river flow. Parameters and orders of the model are estimated using Akaike's information criterion. Its application to the prediction of daily discharges of Kanna River and Bird Creek is discussed.
Donnellan, M Brent; Ackerman, Robert A; Brecheen, Courtney
2016-01-01
Although the Rosenberg Self-Esteem Scale (RSES) is the most widely used measure of global self-esteem in the literature, there are ongoing disagreements about its factor structure. This methodological debate informs how the measure should be used in substantive research. Using a sample of 1,127 college students, we test the overall fit of previously specified models for the RSES, including a newly proposed bifactor solution (McKay, Boduszek, & Harvey, 2014). We extend previous work by evaluating how various latent factors from these structural models are related to a set of criterion variables frequently studied in the self-esteem literature. A strict unidimensional model poorly fit the data, whereas models that accounted for correlations between negatively and positively keyed items tended to fit better. However, global factors from viable structural models had similar levels of association with criterion variables and with the pattern of results obtained with a composite global self-esteem variable calculated from observed scores. Thus, we did not find compelling evidence that different structural models had substantive implications, thereby reducing (but not eliminating) concerns about the integrity of the self-esteem literature based on overall composite scores for the RSES.
Garcia, Darren J.; Skadberg, Rebecca M.; Schmidt, Megan; ...
2018-03-05
The Diagnostic and Statistical Manual of Mental Disorders (5th ed. [DSM–5]; American Psychiatric Association, 2013) Section III Alternative Model for Personality Disorders (AMPD) represents a novel approach to the diagnosis of personality disorder (PD). In this model, PD diagnosis requires evaluation of level of impairment in personality functioning (Criterion A) and characterization by pathological traits (Criterion B). Questions about clinical utility, complexity, and difficulty in learning and using the AMPD have been expressed in recent scholarly literature. We examined the learnability, interrater reliability, and clinical utility of the AMPD using a vignette methodology and graduate student raters. Results showed that student clinicians can learn Criterion A of the AMPD to a high level of interrater reliability and agreement with expert ratings. Interrater reliability of the 25 trait facets of the AMPD varied but showed overall acceptable levels of agreement. Examination of severity indexes of PD impairment showed the level of personality functioning (LPF) added information beyond that of global assessment of functioning (GAF). Clinical utility ratings were generally strong. Lastly, the satisfactory interrater reliability of components of the AMPD indicates the model, including the LPF, is very learnable.
Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert
2016-09-01
The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested in feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier by investigating: 1) a simple statistical criterion (a leave-one-out-based test); 2) a computationally intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision-making phase.
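As an illustration of the bootstrap-resampling strategy described above, the sketch below refits a linear maximum-margin classifier on bootstrap resamples of a bag-of-words matrix and records how often each feature ranks among the largest-magnitude weights. The function, its parameters, and the top-k ranking rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bootstrap_feature_stability(X, y, n_boot=200, top_k=50, random_state=0):
    """Count how often each bag-of-words feature appears among the largest-magnitude
    weights of a linear max-margin classifier refit on bootstrap resamples."""
    rng = np.random.default_rng(random_state)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                       # sample with replacement
        clf = LinearSVC(C=1.0, dual=False, max_iter=5000).fit(X[idx], y[idx])
        top = np.argsort(np.abs(clf.coef_.ravel()))[-top_k:]   # indices of largest weights
        counts[top] += 1
    return counts / n_boot                                     # selection frequency per feature
```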
Pak, Mehmet; Gülci, Sercan; Okumuş, Arif
2018-01-06
This study focuses on the geo-statistical assessment of spatial estimation models of forest crimes. Widely used in the assessment of crime and crime-dependent variables, geographic information systems (GIS) help detect forest crimes in rural regions. In this study, forest crimes (forest encroachment, illegal use, illegal timber logging, etc.) are assessed holistically and modeling was performed with ten different independent variables in a GIS environment. The research areas are three Forest Enterprise Chiefs (Baskonus, Cinarpinar, and Hartlap) affiliated to the Kahramanmaras Forest Regional Directorate in Kahramanmaras. An estimation model was designed using ordinary least squares (OLS) and geographically weighted regression (GWR) methods, which are often used in spatial association. Three different models were proposed in order to increase the accuracy of the estimation model. Variables with a variance inflation factor (VIF) value lower than 7.5 were used in Model I, variables with a VIF lower than 4 in Model II, and dependent variables with significant robust probability values in Model III, all associated with forest crimes. Afterwards, the model with the lowest corrected Akaike Information Criterion (AICc) and the highest R2 value was selected as the comparison criterion. Consequently, Model III proved to be more accurate than the other models. For Model III, AICc was 328,491 and R2 was 0.634 for the OLS-3 model, while AICc was 318,489 and R2 was 0.741 for the GWR-3 model. In this respect, the use of GIS for combating forest crimes provides different scenarios and tangible information that will help take political and strategic measures.
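As a rough illustration of the AICc bookkeeping behind such OLS/GWR comparisons, the sketch below fits an ordinary least squares model to synthetic data and applies the small-sample correction; the variable names, sample size, and coefficients are invented, and the GWR side of the comparison is omitted.

```python
# Hypothetical sketch: AICc for an OLS regression, assuming synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 3))                      # stand-in, VIF-screened predictors
y = 1.0 + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.7, size=n)

res = sm.OLS(y, sm.add_constant(X)).fit()

k = int(res.df_model) + 2                        # slopes + intercept + error variance
aic = -2 * res.llf + 2 * k
aicc = aic + (2 * k * (k + 1)) / (n - k - 1)     # small-sample (corrected) AIC
print(f"AIC = {aic:.2f}, AICc = {aicc:.2f}, R2 = {res.rsquared:.3f}")
```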
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion
NASA Astrophysics Data System (ADS)
Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo
2018-05-01
The strength and deformation characteristics of rocks are the most important mechanical properties for rock engineering constructions. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). The proposed criterion accounts for the intermediate principal stress effect, unlike the HB criterion, and is nonlinear in the meridian plane, unlike the NUSC. Only three parameters need to be determined by experiments, including the two HB parameters σc and mi. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits the true triaxial test data well and performs better than the other three existing criteria. Then, by introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, whose yield function is derived from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The performance of the constitutive model shows good agreement with the results of experimental tests.
Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion
NASA Astrophysics Data System (ADS)
Tsankov, Tsanko V.; Czarnetzki, Uwe
2017-05-01
Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows for the first time non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically a so far missing term in the criterion is derived and shown to be of key importance. With the new term the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory and a numerical model with excellent agreement throughout.
NASA Astrophysics Data System (ADS)
Cawiding, Olive R.; Natividad, Gina May R.; Bato, Crisostomo V.; Addawe, Rizavel C.
2017-11-01
The prevalence of typhoid fever in developing countries such as the Philippines calls for accurate forecasting of the disease. This will be of great assistance in strategic disease prevention. This paper presents the development of models that predict the behavior of typhoid fever incidence based on the monthly incidence in the provinces of the Cordillera Administrative Region from 2010 to 2015 using univariate time series analysis. The data used were obtained from the Cordillera Office of the Department of Health (DOH-CAR). Seasonal autoregressive integrated moving average (SARIMA) models were used to incorporate the seasonality of the data. A comparison of the obtained models revealed that the SARIMA (1,1,7)(0,0,1)12 model with a fixed coefficient at the seventh lag produces the smallest root mean square error (RMSE), mean absolute error (MAE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC). The model suggested that for the year 2016, the number of cases would increase from July to September and drop in December. This was then validated using the data collected from January 2016 to December 2016.
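A minimal sketch of this kind of SARIMA order selection with statsmodels is given below; the monthly series is simulated and the candidate orders are placeholders, not the specifications fitted in the study.

```python
# Hedged sketch: ranking candidate SARIMA specifications by AIC/BIC on a fake
# monthly incidence series (the data and orders are illustrative only).
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
t = np.arange(72)
counts = 50 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=5, size=t.size)
y = pd.Series(counts, index=pd.date_range("2010-01-01", periods=t.size, freq="MS"))

candidates = [((1, 1, 1), (0, 0, 1, 12)),
              ((0, 1, 2), (0, 0, 1, 12)),
              ((2, 1, 1), (1, 0, 0, 12))]
for order, seasonal_order in candidates:
    fit = SARIMAX(y, order=order, seasonal_order=seasonal_order).fit(disp=False)
    print(order, seasonal_order, f"AIC={fit.aic:.1f}", f"BIC={fit.bic:.1f}")
```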
Roberts, Steven; Martin, Michael A
2006-12-15
The shape of the dose-response relation between particulate matter air pollution and mortality is crucial for public health assessment, and departures of this relation from linearity could have important regulatory consequences. A number of investigators have studied the shape of the particulate matter-mortality dose-response relation and concluded that the relation could be adequately described by a linear model. Some of these researchers examined the hypothesis of linearity by comparing Akaike's Information Criterion (AIC) values obtained under linear, piecewise linear, and spline alternative models. However, at the current time, the efficacy of the AIC in this context has not been assessed. The authors investigated AIC as a means of comparing competing dose-response models, using data from Cook County, Illinois, for the period 1987-2000. They found that if nonlinearities exist, the AIC is not always successful in detecting them. In a number of the scenarios considered, AIC was equivocal, picking the correct simulated dose-response model about half of the time. These findings suggest that further research into the shape of the dose-response relation using alternative model selection criteria may be warranted.
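The kind of linear-versus-nonlinear comparison described above can be sketched as follows; the sketch replaces the study's time-series mortality regressions with a plain OLS fit on simulated data and a single made-up change point at 30, purely to show how AIC adjudicates between the two shapes.

```python
# Illustrative only: comparing a linear and a single-knot piecewise-linear
# dose-response shape by AIC (synthetic data; the real analyses used
# time-series mortality models, not simple OLS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
pm = rng.uniform(0, 60, size=500)                       # hypothetical PM level
outcome = 0.01 * pm + 0.02 * np.clip(pm - 30, 0, None) + rng.normal(scale=0.3, size=pm.size)

fit_linear = sm.OLS(outcome, sm.add_constant(pm)).fit()
fit_piecewise = sm.OLS(outcome, sm.add_constant(
    np.column_stack([pm, np.clip(pm - 30, 0, None)]))).fit()

print("linear AIC:", round(fit_linear.aic, 1),
      "piecewise AIC:", round(fit_piecewise.aic, 1))
```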
Comparing simple respiration models for eddy flux and dynamic chamber data
Andrew D. Richardson; Bobby H. Braswell; David Y. Hollinger; Prabir Burman; Eric A. Davidson; Robert S. Evans; Lawrence B. Flanagan; J. William Munger; Kathleen Savage; Shawn P. Urbanski; Steven C. Wofsy
2006-01-01
Selection of an appropriate model for respiration (R) is important for accurate gap-filling of CO2 flux data, and for partitioning measurements of net ecosystem exchange (NEE) to respiration and gross ecosystem exchange (GEE). Using cross-validation methods and a version of Akaike's Information Criterion (AIC), we evaluate a wide range of...
da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2014-09-01
This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation = 2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ) < .40. All models had adequate convergent validity (average variance extracted = .43-.58; composite reliability = .85-.97) and internal consistency (α = .85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion = 84.07, Bayes information criterion = 157.75, Browne-Cudeck criterion = 84.46), with strong invariance for independent samples (Δχ²λ(7) = 5.06, Δχ²Cov(8) = 5.11, Δχ²Res(16) = 19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.
Aragón-Noriega, Eugenio Alberto
2013-09-01
Growth models of marine animals, for fisheries and/or aquaculture purposes, are usually based on the popular von Bertalanffy model. This tool is mostly used because its parameters feed other fisheries models, such as yield per recruit; nevertheless, there are other alternatives (such as Gompertz, Logistic, Schnute) not yet widely used by fishery scientists that may prove useful depending on the studied species. The penshell Atrina maura has been studied for fisheries and aquaculture purposes, but its individual growth has not been studied before. The aim of this study was to model the absolute growth of the penshell A. maura using length-age data. For this, five models were assessed to obtain growth parameters: von Bertalanffy, Gompertz, Logistic, Schnute case 1, and Schnute and Richards. The criteria used to select the best models were the Akaike information criterion, as well as the residual sum of squares and adjusted R2. To obtain the average asymptotic length, the multi-model inference approach was used. According to the Akaike information criterion, the Gompertz model better described the absolute growth of A. maura. Following the multi-model inference approach, the average asymptotic shell length was 218.9 mm (CI 212.3-225.5). I concluded that the use of the multi-model approach and the Akaike information criterion represented the most robust method for growth parameter estimation of A. maura, and that the von Bertalanffy growth model should not be selected a priori as the true model for absolute growth in bivalve mollusks such as the species studied here.
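A hedged sketch of this multi-model inference workflow (fit several growth curves, compute a least-squares AIC, and convert AIC differences into Akaike weights) is shown below with simulated length-at-age data and only two of the five candidate models.

```python
# Sketch under simulated data: von Bertalanffy vs Gompertz growth curves,
# least-squares AIC, and Akaike weights for multi-model inference.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, linf, k, t0):
    return linf * (1 - np.exp(-k * (t - t0)))

def gompertz(t, linf, k, t0):
    return linf * np.exp(-np.exp(-k * (t - t0)))

rng = np.random.default_rng(3)
age = np.repeat(np.arange(1.0, 8.0), 15)
length = gompertz(age, 220.0, 0.6, 2.0) + rng.normal(scale=8, size=age.size)

def aic_of(model, p0):
    params, _ = curve_fit(model, age, length, p0=p0, maxfev=10000)
    rss = np.sum((length - model(age, *params)) ** 2)
    n, k = length.size, len(params) + 1           # +1 for the error variance
    return n * np.log(rss / n) + 2 * k            # least-squares form of AIC

aics = {"von Bertalanffy": aic_of(von_bertalanffy, [250, 0.3, 0.0]),
        "Gompertz": aic_of(gompertz, [250, 0.5, 1.0])}
deltas = {m: a - min(aics.values()) for m, a in aics.items()}
raw = {m: np.exp(-0.5 * d) for m, d in deltas.items()}
weights = {m: w / sum(raw.values()) for m, w in raw.items()}
print({m: round(w, 3) for m, w in weights.items()})
```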
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution is illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.
Ng, Edmond S-W; Diaz-Ordaz, Karla; Grieve, Richard; Nixon, Richard M; Thompson, Simon G; Carpenter, James R
2016-10-01
Multilevel models provide a flexible modelling framework for cost-effectiveness analyses that use cluster randomised trial data. However, there is a lack of guidance on how to choose the most appropriate multilevel models. This paper illustrates an approach for deciding what level of model complexity is warranted; in particular how best to accommodate complex variance-covariance structures, right-skewed costs and missing data. Our proposed models differ according to whether or not they allow individual-level variances and correlations to differ across treatment arms or clusters and by the assumed cost distribution (Normal, Gamma, Inverse Gaussian). The models are fitted by Markov chain Monte Carlo methods. Our approach to model choice is based on four main criteria: the characteristics of the data, model pre-specification informed by the previous literature, diagnostic plots and assessment of model appropriateness. This is illustrated by re-analysing a previous cost-effectiveness analysis that uses data from a cluster randomised trial. We find that the most useful criterion for model choice was the deviance information criterion, which distinguishes amongst models with alternative variance-covariance structures, as well as between those with different cost distributions. This strategy for model choice can help cost-effectiveness analyses provide reliable inferences for policy-making when using cluster trials, including those with missing data. © The Author(s) 2013.
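For readers unfamiliar with the deviance information criterion used for model choice here, the sketch below shows the generic computation from MCMC output (DIC = D̄ + pD, with pD = D̄ − D(θ̄)); the "posterior draws" are simulated stand-ins rather than output from the paper's Bayesian cost models.

```python
# Generic, hedged sketch of DIC from posterior draws (toy Normal model).
import numpy as np

def dic(deviance_draws, deviance_at_posterior_mean):
    dbar = np.mean(deviance_draws)
    p_d = dbar - deviance_at_posterior_mean        # effective number of parameters
    return dbar + p_d, p_d

rng = np.random.default_rng(4)
y = rng.normal(2.0, 1.0, size=50)
# pretend these came from an MCMC sampler for (mu, sigma)
mu_draws = rng.normal(y.mean(), 0.15, size=2000)
sigma_draws = np.abs(rng.normal(y.std(), 0.10, size=2000))

def deviance(mu, sigma):
    return -2 * np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (y - mu) ** 2 / (2 * sigma**2))

dev_draws = np.array([deviance(m, s) for m, s in zip(mu_draws, sigma_draws)])
dic_value, p_d = dic(dev_draws, deviance(mu_draws.mean(), sigma_draws.mean()))
print(f"DIC = {dic_value:.1f}, pD = {p_d:.2f}")
```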
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, the Bayesian method has become more popular for analyzing high-dimensional gene expression data, as it allows us to borrow information across different genes and provides powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
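The flavor of such a posterior-probability selection rule can be sketched as follows; the posterior draws are simulated placeholders and the thresholds are arbitrary, so this is only a schematic of the "declare DE if the posterior probability of a large standardized difference is high" logic, not the paper's actual model.

```python
# Hedged sketch: flag a gene as differentially expressed (DE) when the posterior
# probability of a large standardized mean difference exceeds a cutoff.
import numpy as np

rng = np.random.default_rng(5)
n_genes, n_draws = 1000, 4000
mu1 = rng.normal(0.0, 1.0, size=(n_genes, n_draws))
shift = np.where(np.arange(n_genes)[:, None] < 50, 1.5, 0.0)   # first 50 genes truly DE
mu2 = mu1 + shift + rng.normal(0.0, 0.2, size=(n_genes, n_draws))
sd = np.abs(rng.normal(1.0, 0.05, size=(n_genes, n_draws)))

effect_threshold, prob_cutoff = 0.8, 0.95
post_prob = np.mean(np.abs(mu2 - mu1) / sd > effect_threshold, axis=1)
de_genes = np.flatnonzero(post_prob > prob_cutoff)
print(len(de_genes), "genes flagged as DE")
```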
Convergent, discriminant, and criterion validity of DSM-5 traits.
Yalch, Matthew M; Hopwood, Christopher J
2016-10-01
Section III of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association, 2013) contains a system for diagnosing personality disorder based in part on assessing 25 maladaptive traits. Initial research suggests that this aspect of the system improves the validity and clinical utility of the Section II model. The Computer Adaptive Test of Personality Disorder (CAT-PD; Simms et al., 2011) contains many traits similar to those in the DSM-5, as well as several additional traits seemingly not covered in the DSM-5. In this study we evaluate the convergent and discriminant validity between the DSM-5 traits, as assessed by the Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012), and the CAT-PD in an undergraduate sample, and test whether traits included in the CAT-PD but not the DSM-5 provide incremental validity in association with clinically relevant criterion variables. Results supported the convergent and discriminant validity of the PID-5 and CAT-PD scales in their assessment of 23 out of 25 DSM-5 traits. DSM-5 traits were consistently associated with 11 criterion variables, despite our having intentionally selected clinically relevant criterion constructs not directly assessed by DSM-5 traits. However, the additional CAT-PD traits provided incremental information above and beyond the DSM-5 traits for all criterion variables examined. These findings support the validity of pathological trait models in general and the DSM-5 and CAT-PD models in particular, while also suggesting that the CAT-PD may include additional traits for consideration in future iterations of the DSM-5 system. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
A Novel Statistical Analysis and Interpretation of Flow Cytometry Data
2013-03-31
[Fragmentary excerpt from the report body: the resulting residuals appear random; in the work that follows, I* = 200, and the values of B and b̂j are known from the experiment; parameters are estimated in conjunction with the model parameter vector in a two-stage process, though two-stage estimation may cause some parameters of the mathematical model to [...]; information-theoretic criteria such as Akaike's Information Criterion (AIC) are applied, together with scaled residuals rjk.]
A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Edmonston, Leon P.; Randall, Robert S.
A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…
Criteria for determining a basic health services package. Recent developments in The Netherlands.
Stolk, E A; Poley, M J
2005-03-01
The criterion of medical need figures prominently in the Dutch model for reimbursement decisions as well as in many international models for health care priority setting. Nevertheless the conception of need remains too vague and general to be applied successfully in priority decisions. This contribution explores what is wrong with the proposed definitions of medical need and identifies features in the decision-making process that inhibit implementation and usefulness of this criterion. In contrast to what is commonly assumed, the problem is not so much a failure to understand the nature of the medical need criterion and the value judgments involved. Instead the problem seems to be a mismatch between the information regarding medical need and the way in which these concerns are incorporated into policy models. Criteria--medical need, as well as other criteria such as effectiveness and cost-effectiveness--are usually perceived as "hurdles," and each intervention can pass or fail assessment on the basis of each criterion and therefore be included or excluded from public funding. These models fail to understand that choices are not so much between effective and ineffective treatments, or necessary and unnecessary ones. Rather, choices are often between interventions that are somewhat effective and/or needed. Evaluation of such services requires a holistic approach and not a sequence of fail or pass judgments. To improve applicability of criteria that pertain to medical need we therefore suggest further development of these criteria beyond their original binary meaning and propose meaningful ways in which these criteria can be integrated into policy decisions.
Effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum.
Kadam, Shekhar U; Tiwari, Brijesh K; O'Donnell, Colm P
2015-03-01
The effect of ultrasound pre-treatment on the drying kinetics of brown seaweed Ascophyllum nodosum under hot-air convective drying was investigated. Pretreatments were carried out at ultrasound intensity levels ranging from 7.00 to 75.78 Wcm(-2) for 10 min using an ultrasonic probe system. It was observed that ultrasound pre-treatments reduced the drying time required. The shortest drying times were obtained from samples pre-treated at 75.78 Wcm(-2). The fit quality of 6 thin-layer drying models was also evaluated using the determination of coefficient (R(2)), root means square error (RMSE), AIC (Akaike information criterion) and BIC (Bayesian information criterion). Drying kinetics were modelled using the Newton, Henderson and Pabis, Page, Wang and Singh, Midilli et al. and Weibull models. The Newton, Wang and Singh, and Midilli et al. models showed the best fit to the experimental drying data. Color of ultrasound pretreated dried seaweed samples were lighter compared to control samples. It was concluded that ultrasound pretreatment can be effectively used to reduce the energy cost and drying time for drying of A. nodosum. Copyright © 2014 Elsevier B.V. All rights reserved.
Experiments in Error Propagation within Hierarchal Combat Models
2015-09-01
[Fragmentary excerpt; the source lists abbreviations (BIC: Bayesian Information Criterion; CNO: Chief of Naval Operations; DOE: Design of Experiments; DOD: Department of Defense; MANA: Map Aware Non-uniform Automata) and describes a "ground up" approach: first, a mission-level model of one-on-one submarine combat is developed in Map Aware Non-uniform Automata (MANA), an agent-based simulation that can model the different postures of submarines; the results from MANA are then fed into stochastic [...].]
Semivariogram modeling by weighted least squares
Jian, X.; Olea, R.A.; Yu, Y.-S.
1996-01-01
Permissible semivariogram models are fundamental for geostatistical estimation and simulation of attributes having a continuous spatiotemporal variation. The usual practice is to fit those models manually to experimental semivariograms. Fitting by weighted least squares produces results comparable to manual fitting, in less time and systematically, and provides an Akaike information criterion for the proper comparison of alternative models. We illustrate the application of a computer program with examples showing the fitting of simple and nested models. Copyright © 1996 Elsevier Science Ltd.
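A compact sketch of weighted-least-squares semivariogram fitting is shown below: a spherical model is fitted to an invented empirical semivariogram with Cressie-type weights, and a rough least-squares AIC analogue is reported for comparing alternative models; none of the numbers come from the paper.

```python
# Hedged sketch: WLS fit of a spherical semivariogram model (synthetic values).
import numpy as np
from scipy.optimize import minimize

lags = np.array([50.0, 100, 150, 200, 250, 300, 350, 400])
gamma_hat = np.array([0.31, 0.54, 0.71, 0.83, 0.90, 0.94, 0.96, 0.97])
n_pairs = np.array([200, 380, 520, 610, 650, 640, 600, 550])

def spherical(h, nugget, sill, range_):
    hr = np.minimum(h / range_, 1.0)
    return nugget + (sill - nugget) * (1.5 * hr - 0.5 * hr**3)

def wls_loss(theta):
    fit = spherical(lags, *theta)
    return np.sum(n_pairs * (gamma_hat - fit) ** 2 / fit**2)   # Cressie-type weights

res = minimize(wls_loss, x0=[0.1, 1.0, 300.0], method="Nelder-Mead")
n, k = lags.size, 3
aic_like = n * np.log(wls_loss(res.x) / n) + 2 * k             # rough AIC analogue
print("nugget, sill, range:", np.round(res.x, 3), " AIC-like score:", round(aic_like, 2))
```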
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications or clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. It is important to handle censored data properly in order to prevent biased information in the analysis. Therefore, this study was carried out to analyze right-censored data with three different parametric models: the exponential, Weibull and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right-censored data. The variables included in this study are the patients' survival time t, the age of each patient X1 and the treatment given to the patient X2. In order to determine the best parametric model for analysing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and log-likelihood value using the statistical software R. When analysing the breast cancer data, all three distributions showed consistency with the data, with the line graph of the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it had the smallest AIC and BIC values and the largest log-likelihood.
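A self-contained sketch of the model comparison described above (without covariates) appears below: the censored log-likelihoods of exponential, Weibull, and log-logistic models are maximized on simulated right-censored survival times and ranked by AIC and BIC; the parametrizations and data are illustrative assumptions, not the hospital records.

```python
# Hedged sketch: AIC/BIC comparison of parametric survival models under
# right censoring, using simulated data and a censored log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n = 300
t_true = rng.weibull(1.5, size=n) * 24          # hypothetical survival times (months)
censor = rng.uniform(0, 36, size=n)
time = np.minimum(t_true, censor)
event = (t_true <= censor).astype(float)

def negloglik(params, log_pdf, log_surv):
    return -np.sum(event * log_pdf(time, params) + (1 - event) * log_surv(time, params))

models = {
    "exponential": (1, lambda t, p: np.log(p[0]) - p[0] * t,
                       lambda t, p: -p[0] * t),
    "weibull":     (2, lambda t, p: np.log(p[0]) + np.log(p[1]) + (p[1] - 1) * np.log(t) - p[0] * t**p[1],
                       lambda t, p: -p[0] * t**p[1]),
    "log-logistic": (2, lambda t, p: np.log(p[1]) - np.log(p[0]) + (p[1] - 1) * np.log(t / p[0])
                                     - 2 * np.log1p((t / p[0])**p[1]),
                        lambda t, p: -np.log1p((t / p[0])**p[1])),
}
for name, (k, lpdf, lsurv) in models.items():
    res = minimize(lambda p: negloglik(np.abs(p), lpdf, lsurv), x0=np.ones(k),
                   method="Nelder-Mead")
    ll = -res.fun
    print(f"{name:13s} AIC={-2*ll + 2*k:8.1f}  BIC={-2*ll + k*np.log(n):8.1f}")
```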
NASA Astrophysics Data System (ADS)
Jia, Chen; Chen, Yong
2015-05-01
In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion to identify a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann-Schmiedl-Seifert (ASS) criterion. Moreover, the ASS criterion is compared with some other criteria for a NESS.
Marcus, Alonna; Wilder, David A
2009-01-01
Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism.
Modeling crime events by d-separation method
NASA Astrophysics Data System (ADS)
Aarthee, R.; Ezhilmaran, D.
2017-11-01
Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. d-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can deal with evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details. This will hopefully improve the communication between judges or jurors and experts. The proposed method is used to uncover more valid independencies than other graphical criteria.
Optimal experimental designs for fMRI when the model matrix is uncertain.
Kao, Ming-Hung; Zhou, Lin
2017-07-15
This study concerns optimal designs for functional magnetic resonance imaging (fMRI) experiments when the model matrix of the statistical model depends on both the selected stimulus sequence (fMRI design) and the subject's uncertain feedback (e.g. answer) to each mental stimulus (e.g. question) presented to her/him. While practically important, this design issue is challenging, mainly because the information matrix cannot be fully determined at the design stage, making it difficult to evaluate the quality of the selected designs. To tackle this challenging issue, we propose an easy-to-use optimality criterion for evaluating the quality of designs, and an efficient approach for obtaining designs that optimize this criterion. Compared with a previously proposed method, our approach requires much less computing time to achieve designs with high statistical efficiencies. Copyright © 2017 Elsevier Inc. All rights reserved.
Zimmermann, Johannes; Böhnke, Jan R; Eschstruth, Rhea; Mathews, Alessa; Wenzel, Kristin; Leising, Daniel
2015-08-01
The alternative model for the classification of personality disorders (PD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III comprises 2 major components: impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). In this study, we investigated the latent structure of Criterion A (a) within subdomains, (b) across subdomains, and (c) in conjunction with the Criterion B trait facets. Data were gathered as part of an online study that collected other-ratings by 515 laypersons and 145 therapists. Laypersons were asked to assess 1 of their personal acquaintances, whereas therapists were asked to assess 1 of their patients, using 135 items that captured features of Criteria A and B. We were able to show that (a) the structure within the Criterion A subdomains can be appropriately modeled using generalized graded unfolding models, with results suggesting that the items are indeed related to common underlying constructs but often deviate from their theoretically expected severity level; (b) the structure across subdomains is broadly in line with a model comprising 2 strongly correlated factors of self- and interpersonal functioning, with some notable deviations from the theoretical model; and (c) the joint structure of the Criterion A subdomains and the Criterion B facets broadly resembles the expected model of 2 plus 5 factors, albeit the loading pattern suggests that the distinction between Criteria A and B is somewhat blurry. Our findings provide support for several major assumptions of the alternative DSM-5 model for PD but also highlight aspects of the model that need to be further refined. (c) 2015 APA, all rights reserved).
Spotted Towhee population dynamics in a riparian restoration context
Stacy L. Small; Frank R., III Thompson; Geoffery R. Geupel; John Faaborg
2007-01-01
We investigated factors at multiple scales that might influence nest predation risk for Spotted Towhees (Pipilo maculatus) along the Sacramento River, California, within the context of large-scale riparian habitat restoration. We used the logistic-exposure method and Akaike's information criterion (AIC) for model selection to compare predator...
He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing
2017-11-02
Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via an adaptively penalized likelihood criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). Using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5, and correlations between early and late time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.
Stefanutti, Luca; Robusto, Egidio; Vianello, Michelangelo; Anselmi, Pasquale
2013-06-01
A formal model is proposed that decomposes the implicit association test (IAT) effect into three process components: stimuli discrimination, automatic association, and termination criterion. Both response accuracy and reaction time are considered. Four independent and parallel Poisson processes, one for each of the four label categories of the IAT, are assumed. The model parameters are the rate at which information accrues on the counter of each process and the amount of information that is needed before a response is given. The aim of this study is to present the model and an illustrative application in which the process components of a Coca-Pepsi IAT are decomposed.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for the individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by Akaike's information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
The Brockport Physical Fitness Test Training Manual. [Project Target].
ERIC Educational Resources Information Center
Winnick, Joseph P.; Short, Francis X., Ed.
This training manual presents information on the Brockport Physical Fitness Test (BPFT), a criterion-referenced fitness test for children and adolescents with disabilities. The first chapter of the test training manual includes information dealing with health-related criterion-referenced testing, the interaction between physical activity and…
Rocha, R R A; Thomaz, S M; Carvalho, P; Gomes, L C
2009-06-01
The need for prediction is widely recognized in limnology. In this study, data from 25 lakes of the Upper Paraná River floodplain were used to build models to predict chlorophyll-a and dissolved oxygen concentrations. Akaike's information criterion (AIC) was used as a criterion for model selection. Models were validated with independent data obtained in the same lakes in 2001. Predictor variables that significantly explained chlorophyll-a concentration were pH, electrical conductivity, total seston (positive correlation) and nitrate (negative correlation). This model explained 52% of chlorophyll variability. Variables that significantly explained dissolved oxygen concentration were pH, lake area and nitrate (all positive correlations); water temperature and electrical conductivity were negatively correlated with oxygen. This model explained 54% of oxygen variability. Validation with independent data showed that both models had the potential to predict algal biomass and dissolved oxygen concentration in these lakes. These findings suggest that multiple regression models are valuable and practical tools for understanding the dynamics of ecosystems and that predictive limnology may still be considered a powerful approach in aquatic ecology.
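A hedged sketch of AIC-guided predictor selection in this multiple-regression setting is given below, using exhaustive subsets over four hypothetical limnological predictors and a simulated response; the column names and coefficients are stand-ins, not the floodplain survey data.

```python
# Sketch: exhaustive subset selection by AIC for a chlorophyll-a regression,
# assuming synthetic data and hypothetical predictor names.
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 25
df = pd.DataFrame({
    "pH": rng.normal(7, 0.5, n),
    "conductivity": rng.normal(40, 8, n),
    "seston": rng.normal(10, 3, n),
    "nitrate": rng.normal(5, 2, n),
})
df["chla"] = 2 + 0.9 * df.pH + 0.05 * df.seston - 0.3 * df.nitrate + rng.normal(0, 0.8, n)

best = None
for k in range(1, 5):
    for subset in itertools.combinations(["pH", "conductivity", "seston", "nitrate"], k):
        fit = sm.OLS(df["chla"], sm.add_constant(df[list(subset)])).fit()
        if best is None or fit.aic < best[0]:
            best = (fit.aic, subset)
print("lowest-AIC subset:", best[1], "AIC:", round(best[0], 2))
```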
Frank, Matthias; Bockholdt, Britta; Peters, Dieter; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Hinz, Peter
2011-05-20
Blunt ballistic impact trauma is a current research topic due to the widespread use of kinetic energy munitions in law enforcement. In the civilian setting, an automatic dummy launcher has recently been identified as source of blunt impact trauma. However, there is no data on the injury risk of conventional dummy launchers. It is the aim of this investigation to predict potential impact injury to the human head and chest on the basis of the Blunt Criterion which is an energy based blunt trauma model to assess vulnerability to blunt weapons, projectile impacts, and behind-armor-exposures. Based on experimentally investigated kinetic parameters, the injury risk of two commercially available gundog retrieval devices (Waidwerk Telebock, Germany; Turner Richards, United Kingdom) was assessed using the Blunt Criterion trauma model for blunt ballistic impact trauma to the head and chest. Assessing chest impact, the Blunt Criterion values for both shooting devices were higher than the critical Blunt Criterion value of 0.37, which represents a 50% risk of sustaining a thoracic skeletal injury of AIS 2 (moderate injury) or AIS 3 (serious injury). The maximum Blunt Criterion value (1.106) was higher than the Blunt Criterion value corresponding to AIS 4 (severe injury). With regard to the impact injury risk to the head, both devices surpass by far the critical Blunt Criterion value of 1.61, which represents a 50% risk of skull fracture. Highest Blunt Criterion values were measured for the Turner Richards Launcher (2.884) corresponding to a risk of skull fracture of higher than 80%. Even though the classification as non-guns by legal authorities might implicate harmlessness, the Blunt Criterion trauma model illustrates the hazardous potential of these shooting devices. The Blunt Criterion trauma model links the laboratory findings to the impact injury patterns of the head and chest that might be expected. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
NASA Astrophysics Data System (ADS)
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the trustworthiness problem of industry software, an approach is proposed that constructs an industry software trustworthiness criterion around the business it supports. Based on the triangle model of "trustworthy grade definition - trustworthy evidence model - trustworthy evaluation", the idea of business trustworthiness is embodied in different aspects of the trustworthy triangle model for a specific industry software system, the power producing management system (PPMS). Business trustworthiness is the center of the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
Multimodel predictive system for carbon dioxide solubility in saline formation waters.
Wang, Zan; Small, Mitchell J; Karamalidis, Athanasios K
2013-02-05
The prediction of carbon dioxide solubility in brine at conditions relevant to carbon sequestration (i.e., high temperature, pressure, and salt concentration (T-P-X)) is crucial when this technology is applied. Eleven mathematical models for predicting CO2 solubility in brine are compared and considered for inclusion in a multimodel predictive system. Model goodness of fit is evaluated over the temperature range 304-433 K, pressure range 74-500 bar, and salt concentration range 0-7 m (NaCl equivalent), using 173 published CO2 solubility measurements, particularly selected for those conditions. The performance of each model is assessed using various statistical methods, including the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Different models emerge as best fits for different subranges of the input conditions. A classification tree is generated using machine learning methods to predict the best-performing model under different T-P-X subranges, allowing development of a multimodel predictive system (MMoPS) that selects and applies the model expected to yield the most accurate CO2 solubility prediction. Statistical analysis of the MMoPS predictions, including a stratified 5-fold cross validation, shows that MMoPS outperforms each individual model and increases the overall accuracy of CO2 solubility prediction across the range of T-P-X conditions likely to be encountered in carbon sequestration applications.
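The routing idea behind such a multimodel predictive system can be sketched as below: each (T, P, X) condition is labeled with the index of its best-fitting solubility model, and a classification tree learns to assign new conditions to a model; the labels here are random placeholders rather than the paper's actual best-model assignments.

```python
# Hedged sketch: a classification tree that routes (T, P, X) conditions to the
# model index that fit best, using placeholder labels and synthetic conditions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
n = 173
conditions = np.column_stack([rng.uniform(304, 433, n),   # temperature (K)
                              rng.uniform(74, 500, n),    # pressure (bar)
                              rng.uniform(0, 7, n)])      # salinity (m NaCl eq.)
best_model = rng.integers(0, 3, size=n)                   # placeholder best-model labels

router = DecisionTreeClassifier(max_depth=3, random_state=0).fit(conditions, best_model)
new_condition = np.array([[350.0, 200.0, 2.0]])
print("use model index:", int(router.predict(new_condition)[0]))
```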
Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion
Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.
2016-01-01
Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391
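As a point of reference, a generic entropy-based summary of classification certainty for a latent class solution can be computed as below; this is the standard relative-entropy statistic for posterior class-membership probabilities, and not necessarily the exact criterion developed in the paper.

```python
# Hedged sketch: relative entropy of posterior class-membership probabilities.
import numpy as np

def relative_entropy(post):
    """post: (n_subjects, n_classes) posterior membership probabilities."""
    n, k = post.shape
    h = -np.sum(post * np.log(np.clip(post, 1e-12, None)))
    return 1.0 - h / (n * np.log(k))     # 1 = perfect separation, 0 = uniform

# toy posterior probabilities for 4 subjects and 3 classes
post = np.array([[0.90, 0.05, 0.05],
                 [0.80, 0.10, 0.10],
                 [0.20, 0.70, 0.10],
                 [0.05, 0.05, 0.90]])
print(round(relative_entropy(post), 3))
```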
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G
2013-01-01
Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and one applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
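For readers unfamiliar with the Legendre-polynomial machinery in random regression models, the sketch below builds the polynomial covariates for a single trait: days in milk are rescaled to [-1, 1] and evaluated at orders 0-3. The usual normalization constants are omitted and the time points are invented, so this is only an illustration of the basis construction.

```python
# Hedged sketch: Legendre polynomial covariates for a random regression model.
import numpy as np
from numpy.polynomial import legendre

days_in_milk = np.array([5.0, 35, 65, 95, 125, 155, 185, 215, 245, 275, 305])
x = 2 * (days_in_milk - days_in_milk.min()) / (days_in_milk.max() - days_in_milk.min()) - 1

order = 3
# column j holds the Legendre polynomial of degree j evaluated at the rescaled times
Phi = np.column_stack([legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)])
print(Phi.round(3))
```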
Modelling of Local Necking and Fracture in Aluminium Alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achani, D.; Eriksson, M.; Hopperstad, O. S.
2007-05-17
Non-linear Finite Element simulations are extensively used in forming and crashworthiness studies of automotive components and structures in which fracture needs to be controlled. For thin-walled ductile materials, the fracture-related phenomena that must be properly represented are thinning instability, ductile fracture and through-thickness shear instability. Proper representation of the fracture process relies on the accuracy of constitutive and fracture models and their parameters, which need to be calibrated through well defined experiments. The present study focuses on local necking and fracture, which is of high industrial importance, and uses a phenomenological criterion for modelling fracture in aluminium alloys. As an accurate description of plastic anisotropy is important, advanced phenomenological constitutive equations based on the yield criterion YLD2000/YLD2003 are used. Uniaxial tensile tests and disc compression tests are performed for identification of the constitutive model parameters. Ductile fracture is described by the Cockcroft-Latham fracture criterion and an in-plane shear test is performed to identify the fracture parameter. The reason is that in a well designed in-plane shear test no thinning instability should occur, and it thus gives more direct information about the phenomenon of ductile fracture. Numerical simulations have been performed using a user-defined material model implemented in the general-purpose non-linear FE code LS-DYNA. The applicability of the model is demonstrated by correlating the predicted and experimental response in the in-plane shear tests and additional plane strain tension tests.
Rational GARCH model: An empirical test for stock returns
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2017-05-01
We propose a new ARCH-type model that uses a rational function to capture the asymmetric response of volatility to returns, known as the "leverage effect". Using 10 individual stocks on the Tokyo Stock Exchange and two stock indices, we compare the new model with several other asymmetric ARCH-type models. We find that according to the deviance information criterion, the new model ranks first for several stocks. Results show that the proposed new model can be used as an alternative asymmetric ARCH-type model in empirical applications.
Bai, Xiaoming; Bessa, Miguel A.; Melro, Antonio R.; ...
2016-10-01
The authors would like to inform that one of the modifications proposed in the article “High-fidelity micro-scale modeling of the thermo-visco-plastic behavior of carbon fiber polymer matrix composites” [1] was found to be unnecessary: the paraboloid yield criterion is sufficient to describe the shear behavior of the epoxy matrix considered (Epoxy 3501-6). The authors recently noted that the experimental work [2] used to validate the pure matrix response considered engineering shear strain instead of its tensorial counterpart, which caused the apparent inconsistency with the paraboloid yield criterion. A recently proposed temperature dependency law for glassy polymers is evaluated herein, and thus better agreement with the experimental results for this epoxy is observed.
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing parameter implies. Derivations from the three-parameter logistic IRT model improve the situation by incorporating ability in guessing; however, they do not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items: the subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method for identifying the guessing group modeled by the hybrid model.
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.
Examinations Wash Back Effects: Challenges to the Criterion Referenced Assessment Model
ERIC Educational Resources Information Center
Mogapi, M.
2016-01-01
Examinations play a central role in the educational system due to the fact that information generated from examinations is used for a variety of purposes. Critical decisions such as selection, placement and determining the instructional effectives of a programme of study all depend on data generated from examinations. Numerous research studies…
Gruner, S. V.; Slone, D.H.; Capinera, J.L.; Turco, M. P.
2017-01-01
Calliphorid species form larval aggregations that are capable of generating heat above ambient temperature. We wanted to determine the relationship between volume, number of larvae, and different combinations of instars on larval mass heat generation. We compared different numbers of Chrysomya megacephala (F.) larvae (40, 100, 250, 600, and 2,000), and different combinations of instars (∼50/50 first and second instars, 100% second instars, ∼50/50 second and third instars, and 100% third instars) at two different ambient temperatures (20 and 30 °C). We compared 13 candidate multiple regression models that were fitted to the data; the models were then scored and ranked with Akaike information criterion and Bayesian information criterion. The results indicate that although instar, age, treatment temperature, elapsed time, and number of larvae in a mass were significant, larval volume was the best predictor of larval mass temperatures. The volume of a larval mass may need to be taken into consideration for determination of a postmortem interval.
Boutet, Isabelle; Collin, Charles A; MacLeod, Lindsey S; Messier, Claude; Holahan, Matthew R; Berry-Kravis, Elizabeth; Gandhi, Reno M; Kogan, Cary S
2018-01-01
To generate meaningful information, translational research must employ paradigms that allow extrapolation from animal models to humans. However, few studies have evaluated translational paradigms on the basis of defined validation criteria. We outline three criteria for validating translational paradigms. We then evaluate the Hebb-Williams maze paradigm (Hebb and Williams, 1946; Rabinovitch and Rosvold, 1951) on the basis of these criteria, using Fragile X syndrome (FXS) as the model disease. We focused on this paradigm because it allows direct comparison of humans and animals on tasks that are behaviorally equivalent (criterion #1) and because it measures spatial information processing, a cognitive domain for which FXS individuals and mice show impairments as compared to controls (criterion #2). We directly compared the performance of affected humans and mice across different experimental conditions and measures of behavior to identify which conditions produce comparable patterns of results in both species. Species differences were negligible for Mazes 2, 4, and 5 irrespective of the presence of visual cues, suggesting that these mazes could be used to measure spatial learning in both species. With regard to performance on the first trial, which reflects visuo-spatial problem solving, Mazes 5 and 9 without visual cues produced the most consistent results. We conclude that the Hebb-Williams maze paradigm has the potential to be utilized in translational research to measure comparable cognitive functions in FXS humans and animals (criterion #3).
Lindström, Martin; Axén, Elin; Lindström, Christine; Beckman, Anders; Moghaddassi, Mahnaz; Merlo, Juan
2006-12-01
The aim of this study was to investigate the influence of contextual (social capital and administrative/neo-materialist) and individual factors on lack of access to a regular doctor. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, representing 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was performed. The effect (intra-class correlations, cross-level modification and odds ratios) of individual and municipality (social capital and health care district) factors on lack of access to a regular doctor was analysed using a simulation method. The Deviance Information Criterion (DIC) was used as the information criterion for the models. The second-level municipality variance in lack of access to a regular doctor is substantial even in the final models with all individual and contextual variables included. The model that results in the largest reduction in DIC is the model including age, sex and individual social participation (which is a network aspect of social capital), but the models which include administrative and social capital second-level factors also reduced the DIC values. This study suggests that both administrative health care district and social capital may partly explain individuals' self-reported lack of access to a regular doctor.
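For readers unfamiliar with the DIC used above, the following hedged sketch shows the standard computation, DIC = mean posterior deviance + pD with pD = mean deviance minus the deviance at the posterior mean; the Bernoulli outcome and posterior draws are simulated stand-ins, not the survey data.

```python
# Hedged sketch of the Deviance Information Criterion for a Bernoulli outcome.
# DIC = Dbar + pD, with pD = Dbar - D(theta_bar). All inputs are placeholders.
import numpy as np

def deviance(y, p):
    """-2 * Bernoulli log-likelihood for outcome y and fitted probabilities p."""
    return -2.0 * np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=500)                  # lack-of-access indicator (toy)
draws = rng.uniform(0.2, 0.4, size=(1000, 500))   # posterior draws of fitted probabilities

dev_draws = np.array([deviance(y, p) for p in draws])
dev_at_mean = deviance(y, draws.mean(axis=0))
p_d = dev_draws.mean() - dev_at_mean              # effective number of parameters
dic = dev_draws.mean() + p_d
print(f"mean deviance = {dev_draws.mean():.1f}, pD = {p_d:.1f}, DIC = {dic:.1f}")
```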
NASA Astrophysics Data System (ADS)
Yang, Liang-Yi; Sun, Di-Hua; Zhao, Min; Cheng, Sen-Lin; Zhang, Geng; Liu, Hui
2018-03-01
In this paper, a new micro-cooperative driving car-following model is proposed to investigate the effect of continuous historical velocity difference information on traffic stability. The linear stability criterion of the new model is derived with linear stability theory and the results show that the unstable region in the headway-sensitivity space will be shrunk by taking the continuous historical velocity difference information into account. Through nonlinear analysis, the mKdV equation is derived to describe the traffic evolution behavior of the new model near the critical point. Via numerical simulations, the theoretical analysis results are verified and the results indicate that the continuous historical velocity difference information can enhance the stability of traffic flow in the micro-cooperative driving process.
NASA Astrophysics Data System (ADS)
Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei
2015-02-01
An improved high-cycle multiaxial fatigue criterion based on the critical plane was proposed in this paper. The critical plane was defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion was extended into a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated with experimental results obtained from tests of 7075-T651 aluminum alloy and with data from the literature.
Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B
2016-09-01
Repeated measures from the same individual have been analyzed by using repeatability and finite dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned to each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h(2) = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any of these ages can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, selection for body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
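The sketch below is only meant to illustrate the mechanics of a Legendre-polynomial regression basis and an AIC/BIC comparison of second- versus third-order fits; the simulated ages and weights are placeholders, and the code ignores the random genetic and permanent environmental effects of the actual animal model.

```python
# Illustrative sketch: Legendre covariates for age rescaled to [-1, 1] and a
# simple AIC/BIC comparison of fixed-regression fits (toy data only).
import numpy as np
from numpy.polynomial import legendre

def legendre_design(age, order):
    """Columns = Legendre polynomials P_0..P_order evaluated at rescaled age."""
    t = 2 * (age - age.min()) / (age.max() - age.min()) - 1
    return np.column_stack([legendre.legval(t, np.eye(order + 1)[i])
                            for i in range(order + 1)])

def ic(y, X):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1), n * np.log(rss / n) + (k + 1) * np.log(n)

rng = np.random.default_rng(2)
age = rng.uniform(0, 84, 2000)
weight = 40 + 25 * age + 0.1 * age**2 + rng.normal(0, 60, 2000)  # toy growth curve

for order in (2, 3):
    aic, bic = ic(weight, legendre_design(age, order))
    print(f"Legendre order {order}: AIC={aic:.1f}, BIC={bic:.1f}")
```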
The Reliability of Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Livingston, Samuel A.
The assumptions of the classical test-theory model are used to develop a theory of reliability for criterion-referenced measures which parallels that for norm-referenced measures. It is shown that the Spearman-Brown formula holds for criterion-referenced measures and that the criterion-referenced reliability coefficient can be used to correct…
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion deblurring a well-posed problem. In coded exposure, the integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter's frequency spectrum into a wider frequency band so that more image information is preserved in the frequency domain. The method used to search for the optimal code is therefore important. In this paper, an improved criterion for the optimal-code search is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained with a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time required to search for the optimal code decreases with the presented method, and the restored image has better subjective quality and superior objective evaluation values.
TESTING NONSTANDARD COSMOLOGICAL MODELS WITH SNLS3 SUPERNOVA DATA AND OTHER COSMOLOGICAL PROBES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhengxiang; Yu Hongwei; Wu Puxun, E-mail: hwyu@hunnu.edu.cn
2012-01-10
We investigate the implications for some nonstandard cosmological models using data from the first three years of the Supernova Legacy Survey (SNLS3), assuming a spatially flat universe. A comparison between the constraints from the SNLS3 and those from other SN Ia samples, such as the ESSENCE, Union2, SDSS-II, and Constitution samples, is given, and the effects of different light-curve fitters are considered. We find that analyzing SNe Ia with SALT2, SALT, or SiFTO gives consistent results, and the tensions between different data sets and different light-curve fitters are obvious for models with fewer free parameters. We also study the constraints from SNLS3 along with data from the cosmic microwave background and the baryonic acoustic oscillations (CMB/BAO), and the latest Hubble parameter versus redshift (H(z)) measurements. Using model selection criteria such as χ²/dof, goodness of fit, the Akaike information criterion, and the Bayesian information criterion, we find that, among all the cosmological models considered here (ΛCDM, constant w, varying w, Dvali-Gabadadze-Porrati (DGP), modified polytropic Cardassian, and the generalized Chaplygin gas), the flat DGP is favored by SNLS3 alone. However, when additional CMB/BAO or H(z) constraints are included, this is no longer the case, and the flat ΛCDM becomes preferred.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
Data mining in soft computing framework: a survey.
Mitra, S; Pal, S K; Mitra, P
2002-01-01
The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally fuzzy sets are suitable for handling the issues related to understandability of patterns, incomplete/noisy data, mixed media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric, robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and the application of soft computing methodologies are indicated. An extensive bibliography is also included.
Mixture of autoregressive modeling orders and its implication on single trial EEG classification
Atyabi, Adham; Shic, Frederick; Naples, Adam
2016-01-01
Autoregressive (AR) models are among the most commonly utilized feature types in Electroencephalogram (EEG) studies because they offer better resolution and smoother spectra and are applicable to short segments of data. Identifying the correct AR modeling order is an open challenge: lower model orders poorly represent the signal, while higher orders increase noise. Conventional methods for estimating modeling order include the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Final Prediction Error (FPE). This article assesses the hypothesis that an appropriate mixture of multiple AR orders is likely to represent the true signal better than any single order. Better spectral representation of underlying EEG patterns can increase the utility of AR features in Brain Computer Interface (BCI) systems by making such systems respond more promptly and correctly to the operator's thoughts. Two mechanisms, evolutionary-based fusion and ensemble-based mixture, are utilized for identifying such an appropriate mixture of modeling orders. The classification performance of the resultant AR mixtures is assessed against several conventional methods utilized by the community, including 1) a well-known set of commonly used orders suggested by the literature, 2) conventional order estimation approaches (e.g., AIC, BIC and FPE), and 3) a blind mixture of AR features originating from a range of well-known orders. Five datasets from BCI competition III that contain 2, 3 and 4 motor imagery tasks are considered for the assessment. The results indicate the superiority of the ensemble-based modeling order mixture and evolutionary-based order fusion methods within all datasets. PMID:28740331
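As a minimal sketch of the conventional order-selection baseline mentioned above (not the article's pipeline), the code below fits AR(p) models by least squares to a synthetic signal and scores each candidate order with AIC, BIC, and FPE.

```python
# Minimal sketch of AR order selection on a toy AR(4) "EEG" segment.
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns residual variance and effective sample size."""
    X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return np.mean(resid ** 2), len(y)

rng = np.random.default_rng(3)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(4, 2000):                      # synthetic AR(4) signal
    x[t] = 0.5*x[t-1] - 0.3*x[t-2] + 0.2*x[t-3] - 0.1*x[t-4] + e[t]

for p in range(1, 11):
    s2, n = fit_ar(x, p)
    aic = n * np.log(s2) + 2 * p
    bic = n * np.log(s2) + p * np.log(n)
    fpe = s2 * (n + p + 1) / (n - p - 1)
    print(f"p={p:2d}  AIC={aic:9.1f}  BIC={bic:9.1f}  FPE={fpe:.4f}")
```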
Mazerolle, M.J.
2006-01-01
In ecology, researchers frequently use observational studies to explain a given pattern, such as the number of individuals in a habitat patch, with a large number of explanatory (i.e., independent) variables. To elucidate such relationships, ecologists have long relied on hypothesis testing to include or exclude variables in regression models, although the conclusions often depend on the approach used (e.g., forward, backward, stepwise selection). Though better tools surfaced in the mid-1970s, they are still underutilized in certain fields, particularly in herpetology. This is the case for the Akaike information criterion (AIC), which is remarkably superior in model selection (i.e., variable selection) to hypothesis-based approaches. It is simple to compute and easy to understand, but more importantly, for a given data set, it provides a measure of the strength of evidence for each model that represents a plausible biological hypothesis relative to the entire set of models considered. Using this approach, one can then compute a weighted average of the estimate and standard error for any given variable of interest across all the models considered. This procedure, termed model-averaging or multimodel inference, yields precise and robust estimates. In this paper, I illustrate the use of the AIC in model selection and inference, as well as the interpretation of results analysed in this framework, with two real herpetological data sets. The AIC and measures derived from it should be routinely adopted by herpetologists. © Koninklijke Brill NV 2006.
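The model-averaging step described here follows the standard Akaike-weight recipe; the short sketch below shows it with invented AIC values and slope estimates for a hypothetical habitat variable.

```python
# Akaike weights and a model-averaged estimate (toy numbers, not the paper's data):
# Delta_i = AIC_i - AIC_min, w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2).
import numpy as np

aic = np.array([210.4, 212.1, 215.8, 219.0])    # candidate-model AICs
beta = np.array([0.42, 0.35, 0.51, 0.10])        # slope estimate in each model

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                                     # Akaike weights

beta_avg = np.sum(w * beta)                      # model-averaged estimate
print("Akaike weights:", np.round(w, 3))
print("model-averaged slope:", round(beta_avg, 3))
```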
Energy Criterion for the Spectral Stability of Discrete Breathers.
Kevrekidis, Panayotis G; Cuevas-Maraver, Jesús; Pelinovsky, Dmitry E
2016-08-26
Discrete breathers are ubiquitous structures in nonlinear anharmonic models ranging from the prototypical example of the Fermi-Pasta-Ulam model to Klein-Gordon nonlinear lattices, among many others. We propose a general criterion for the emergence of instabilities of discrete breathers analogous to the well-established Vakhitov-Kolokolov criterion for solitary waves. The criterion involves the change of monotonicity of the discrete breather's energy as a function of the breather frequency. Our analysis suggests and numerical results corroborate that breathers with increasing (decreasing) energy-frequency dependence are generically unstable in soft (hard) nonlinear potentials.
Criterion learning in rule-based categorization: simulation of neural mechanism and new data.
Helie, Sebastien; Ell, Shawn W; Filoteo, J Vincent; Maddox, W Todd
2015-04-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define 'long' and 'short'). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL's implications for future research on rule learning. Copyright © 2015 Elsevier Inc. All rights reserved.
Bayesian transformation cure frailty models with multivariate failure time data.
Yin, Guosheng
2008-12-10
We propose a class of transformation cure frailty models to accommodate a survival fraction in multivariate failure time data. Established through a general power transformation, this family of cure frailty models includes the proportional hazards and the proportional odds modeling structures as two special cases. Within the Bayesian paradigm, we obtain the joint posterior distribution and the corresponding full conditional distributions of the model parameters for the implementation of Gibbs sampling. Model selection is based on the conditional predictive ordinate statistic and deviance information criterion. As an illustration, we apply the proposed method to a real data set from dentistry.
NASA Astrophysics Data System (ADS)
Li, Xin; Tang, Li; Lin, Hai-Nan
2017-05-01
We compare six models (the baryonic model, two dark matter models, two modified Newtonian dynamics models and one modified gravity model) in accounting for galaxy rotation curves. For the dark matter models, we assume the NFW profile and the core-modified profile for the dark halo, respectively. For the modified Newtonian dynamics models, we discuss Milgrom's MOND theory with two different interpolation functions, the standard and the simple interpolation functions. For the modified gravity, we focus on Moffat's MSTG theory. We fit these models to the observed rotation curves of 9 high-surface-brightness (HSB) and 9 low-surface-brightness (LSB) galaxies. We apply the Bayesian Information Criterion and the Akaike Information Criterion to test the goodness-of-fit of each model. It is found that none of the six models can fit all the galaxy rotation curves well. Two galaxies can be best fitted by the baryonic model without involving nonluminous dark matter. MOND can fit the largest number of galaxies, and only one galaxy can be best fitted by the MSTG model. The core-modified model fits about half of the LSB galaxies well but no HSB galaxies, while the NFW model fits only a small fraction of HSB galaxies and no LSB galaxies. This may imply that the oversimplified NFW and core-modified profiles cannot model the postulated dark matter haloes well. Supported by Fundamental Research Funds for the Central Universities (106112016CDJCR301206), National Natural Science Fund of China (11305181, 11547305 and 11603005), and Open Project Program of State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (Y5KF181CJ1)
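A hedged illustration of the kind of scoring used to compare such fits: with Gaussian errors, AIC = χ²_min + 2k and BIC = χ²_min + k ln N, where k is the number of free parameters and N the number of data points. The chi-square values, parameter counts, and data-point count below are placeholders, not the paper's fitted values.

```python
# Chi-square based information criteria for competing rotation-curve models (toy values).
from math import log

def aic_bic_from_chi2(chi2_min, k, n_points):
    """AIC = chi2 + 2k, BIC = chi2 + k ln N, assuming Gaussian errors."""
    return chi2_min + 2 * k, chi2_min + k * log(n_points)

models = {"baryonic": (35.2, 2), "NFW halo": (22.7, 4),
          "core-modified halo": (20.9, 4), "MOND (simple)": (24.1, 2)}
n_points = 30                                  # data points in one rotation curve
for name, (chi2, k) in models.items():
    aic, bic = aic_bic_from_chi2(chi2, k, n_points)
    print(f"{name:20s} AIC={aic:6.1f}  BIC={bic:6.1f}")
```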
Optimal firing rate estimation
NASA Technical Reports Server (NTRS)
Paulin, M. G.; Hoffman, L. F.
2001-01-01
We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
Shen, Chung-Wei; Chen, Yi-Hau
2015-10-01
Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Prado, R B; Novo, E M L M
2015-05-01
In this study, multi-criteria modeling tools are applied to map the spatial distribution of the drainage basin's potential to pollute Barra Bonita Reservoir, São Paulo State, Brazil. The Barra Bonita Reservoir Basin has undergone intense land use/land cover changes in the last decades, including the fast conversion from pasture into sugarcane. This study addresses the lack of information about the variables (criteria) which affect the pollution potential of the drainage basin by building a Geographic Information System which provides their spatial distribution at the sub-basin level. The GIS was fed with several data sets (geomorphology, pedology, geology, drainage network and rainfall) provided by public agencies. Landsat satellite images provided the land use/land cover map for 2002. Ratings and weights of each criterion, defined by specialists, supported the modeling process. The results showed a wide variability in the pollution potential of different sub-basins depending on the criterion applied. If only land use is analyzed, for instance, less than 50% of the basin is classified as highly threatening to water quality, and this includes sub-basins located near the reservoir, indicating the importance of protection areas at the margins. Despite the subjectivity involved in the weighting processes, the multi-criteria analysis model allowed the simulation of scenarios which support rational land use policies at the sub-basin level regarding the protection of water resources.
A criterion for maximum resin flow in composite materials curing process
NASA Astrophysics Data System (ADS)
Lee, Woo I.; Um, Moon-Kwang
1993-06-01
On the basis of Springer's resin flow model, a criterion for maximum resin flow in autoclave curing is proposed. Validity of the criterion was proved for two resin systems (Fiberite 976 and Hercules 3501-6 epoxy resin). The parameter required for the criterion can be easily estimated from the measured resin viscosity data. The proposed criterion can be used in establishing the proper cure cycle to ensure maximum resin flow and, thus, the maximum compaction.
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders - one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the group participating were not necessarily representative of all pig producers in Australia, and response bias within this group. The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and that the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context. Copyright © 2013 Elsevier B.V. All rights reserved.
Secret Sharing of a Quantum State.
Lu, He; Zhang, Zhen; Chen, Luo-Kan; Li, Zheng-Da; Liu, Chang; Li, Li; Liu, Nai-Le; Ma, Xiongfeng; Chen, Yu-Ao; Pan, Jian-Wei
2016-07-15
Secret sharing of a quantum state, or quantum secret sharing, in which a dealer wants to share a certain amount of quantum information with a few players, has wide applications in quantum information. The critical criterion in a threshold secret sharing scheme is confidentiality: with less than the designated number of players, no information can be recovered. Furthermore, in a quantum scenario, one additional critical criterion exists: the capability of sharing entangled and unknown quantum information. Here, by employing a six-photon entangled state, we demonstrate a quantum threshold scheme, where the shared quantum secrecy can be efficiently reconstructed with a state fidelity as high as 93%. By observing that any one or two parties cannot recover the secrecy, we show that our scheme meets the confidentiality criterion. Meanwhile, we also demonstrate that entangled quantum information can be shared and recovered via our setting, which shows that our implemented scheme is fully quantum. Moreover, our experimental setup can be treated as a decoding circuit of the five-qubit quantum error-correcting code with two erasure errors.
An optimal strategy for functional mapping of dynamic trait loci.
Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling
2010-02-01
As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we have provided a comprehensive set of approaches for modelling the covariance structure and incorporated each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of submodels for the mean vector and the covariance structure. In an example of leaf age growth from a rice molecular genetic project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be well used to study the genetic architecture of dynamic traits of agricultural value.
A generalized nonlinear model-based mixed multinomial logit approach for crash data analysis.
Zeng, Ziqiang; Zhu, Wenbo; Ke, Ruimin; Ash, John; Wang, Yinhai; Xu, Jiuping; Xu, Xinxin
2017-02-01
The mixed multinomial logit (MNL) approach, which can account for unobserved heterogeneity, is a promising unordered model that has been employed in analyzing the effect of factors contributing to crash severity. However, its basic assumption of using a linear function to explore the relationship between the probability of crash severity and its contributing factors can be violated in reality. This paper develops a generalized nonlinear model-based mixed MNL approach which is capable of capturing non-monotonic relationships by developing nonlinear predictors for the contributing factors in the context of unobserved heterogeneity. The crash data on seven Interstate freeways in Washington between January 2011 and December 2014 are collected to develop the nonlinear predictors in the model. Thirteen contributing factors in terms of traffic characteristics, roadway geometric characteristics, and weather conditions are identified to have significant mixed (fixed or random) effects on the crash density in three crash severity levels: fatal, injury, and property damage only. The proposed model is compared with the standard mixed MNL model. The comparison results suggest a slight superiority of the new approach in terms of model fit measured by the Akaike Information Criterion (12.06 percent decrease) and Bayesian Information Criterion (9.11 percent decrease). The predicted crash densities for all three levels of crash severities of the new approach are also closer (on average) to the observations than the ones predicted by the standard mixed MNL model. Finally, the significance and impacts of the contributing factors are analyzed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Density dependence and risk of extinction in a small population of sea otters
Gerber, L.R.; Buenau, K.E.; VanBlaricom, G.
2004-01-01
Sea otters (Enhydra lutris (L.)) were hunted to extinction off the coast of Washington State early in the 20th century. A new population was established by translocations from Alaska in 1969 and 1970, and it currently numbers at least 550 animals. A major threat to the population is the ongoing risk of major oil spills in sea otter habitat. We apply population models to census and demographic data in order to evaluate the status of the population. We fit several density-dependent models to test for density dependence and to determine plausible values for the carrying capacity (K) by comparing model goodness of fit to an exponential model. Model fits were compared using the Akaike Information Criterion (AIC). A significant negative relationship was found between the population growth rate and population size (r² = 0.27, F = 5.57, df = 16, p < 0.05), suggesting density dependence in Washington State sea otters. Information criterion statistics suggest that the model is the most parsimonious, followed closely by the logistic Beverton-Holt model. Values of K ranged from 612 to 759, with best-fit parameter estimates for the Beverton-Holt model of 0.26 for r and 612 for K. The latest (2001) population index count (555) puts the population at 87-92% of the estimated carrying capacity, above the suggested range for the optimum sustainable population (OSP). Elasticity analysis was conducted to examine the effects of proportional changes in vital rates on the population growth rate (λ). The elasticity values indicate the population is most sensitive to changes in survival rates (particularly adult survival).
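The density-dependence test and AIC comparison summarized above can be sketched as follows; the simulated counts, and the simple regression of realized growth rate on abundance, are illustrative assumptions rather than the authors' analysis.

```python
# Illustrative sketch: regress ln(N_{t+1}/N_t) on N_t and compare a
# density-independent model with a density-dependent one via AIC (toy counts).
import numpy as np

def aic_from_rss(rss, n, k):
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the error variance

rng = np.random.default_rng(4)
N = [60.0]
for _ in range(30):                            # simulated logistic-type counts
    N.append(N[-1] * np.exp(0.2 * (1 - N[-1] / 600) + rng.normal(0, 0.05)))
N = np.array(N)
r = np.log(N[1:] / N[:-1])                     # realized annual growth rates
n = len(r)

# Model 1: constant growth rate (density independent)
rss1 = np.sum((r - r.mean()) ** 2)
# Model 2: growth rate declines linearly with abundance (density dependent)
X = np.column_stack([np.ones(n), N[:-1]])
b, *_ = np.linalg.lstsq(X, r, rcond=None)
rss2 = np.sum((r - X @ b) ** 2)

print("AIC density-independent:", round(aic_from_rss(rss1, n, 1), 2))
print("AIC density-dependent:  ", round(aic_from_rss(rss2, n, 2), 2))
```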
Optimization of global model composed of radial basis functions using the term-ranking approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize a global model composed of radial basis functions and thereby improve the model's predictability. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model in which a damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode-parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven out of nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are discussed briefly.
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process models are often degraded by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential merit for GSH fermentation process modeling.
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop the celebrated Akaike information criterion (AIC), the consistent Akaike information criterion (CAIC), and the information-theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, to perform PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
Validation of Cost-Effectiveness Criterion for Evaluating Noise Abatement Measures
DOT National Transportation Integrated Search
1999-04-01
This project will provide the Texas Department of Transportation (TxDOT) with information about the effects of the current cost-effectiveness criterion. The project has reviewed (1) the cost-effectiveness criteria used by other states, (2) the noise b...
A forecasting model for dengue incidence in the District of Gampaha, Sri Lanka.
Withanage, Gayan P; Viswakula, Sameera D; Nilmini Silva Gunawardena, Y I; Hapugoda, Menaka D
2018-04-24
Dengue is one of the major health problems in Sri Lanka, causing an enormous social and economic burden to the country. An accurate early warning system can enhance the efficiency of preventive measures. The aim of the study was to develop and validate a simple, accurate forecasting model for the District of Gampaha, Sri Lanka. Three time-series regression models were developed using monthly rainfall, rainy days, temperature, humidity, wind speed and retrospective dengue incidences over the period January 2012 to November 2015 for the District of Gampaha, Sri Lanka. Various lag times were analyzed to identify optimum forecasting periods, including interactions of multiple lags. The models were validated using epidemiological data from December 2015 to November 2017. The prepared models were compared based on Akaike's information criterion, the Bayesian information criterion and residual analysis. The selected model forecasted correctly with mean absolute errors of 0.07 and 0.22, and root mean squared errors of 0.09 and 0.28, for the training and validation periods, respectively. There were no dengue epidemics observed in the district during the training period, and nine outbreaks occurred during the forecasting period. The proposed model captured five outbreaks and correctly rejected 14 within the testing period of 24 months. The Pierce skill score of the model was 0.49, with a receiver operating characteristic of 86% and 92% sensitivity. The developed weather-based forecasting model allows warning of impending dengue outbreaks and epidemics one month in advance with high accuracy. Depending upon climatic factors, the previous month's dengue cases had a significant effect on the dengue incidences of the current month. The simple, precise and understandable forecasting model developed could be used to manage limited public health resources effectively for patient management, vector surveillance and intervention programmes in the district.
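A hedged sketch of the general workflow (a lagged weather predictor, a linear model scored by AIC/BIC on a training window, and MAE/RMSE on a hold-out window) appears below; all series are simulated, and the real study used district surveillance and meteorological records with several candidate models.

```python
# Toy lagged forecasting workflow: train/validate split, AIC/BIC, MAE/RMSE.
import numpy as np

def fit_score(y, X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ b) ** 2)
    n, k = X.shape
    aic = n * np.log(rss / n) + 2 * (k + 1)
    bic = n * np.log(rss / n) + (k + 1) * np.log(n)
    return b, aic, bic

rng = np.random.default_rng(5)
months = 72
rain = rng.gamma(2.0, 100.0, months)
cases = 20 + 0.05 * np.roll(rain, 2) + rng.normal(0, 5, months)  # 2-month lag (toy)

lag = 2
X = np.column_stack([np.ones(months - lag), rain[:-lag]])   # rainfall lagged by 2 months
y = cases[lag:]
train, test = slice(0, 48), slice(48, None)

b, aic, bic = fit_score(y[train], X[train])
pred = X[test] @ b
mae = np.mean(np.abs(pred - y[test]))
rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
print(f"AIC={aic:.1f}  BIC={bic:.1f}  MAE={mae:.2f}  RMSE={rmse:.2f}")
```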
Monitoring Species of Concern Using Noninvasive Genetic Sampling and Capture-Recapture Methods
2016-11-01
ABBREVIATIONS: AICc, Akaike's Information Criterion with small sample size correction; AZGFD, Arizona Game and Fish Department; BMGR, Barry M. Goldwater...; MNKA, Minimum Number Known Alive; N, Abundance; Ne, Effective Population Size; NGS, Noninvasive Genetic Sampling; NGS-CR, Noninvasive Genetic... Parameter estimates from capture-recapture models require sufficient sample sizes, capture probabilities and low capture biases. For NGS-CR, sample...
Bayesian Decision Tree for the Classification of the Mode of Motion in Single-Molecule Trajectories
Türkcan, Silvan; Masson, Jean-Baptiste
2013-01-01
Membrane proteins move in heterogeneous environments with spatially (sometimes temporally) varying friction and with biochemical interactions with various partners. It is important to reliably distinguish different modes of motion to improve our knowledge of the membrane architecture and to understand the nature of interactions between membrane proteins and their environments. Here, we present an analysis technique for single-molecule tracking (SMT) trajectories that can determine the preferred model of motion that best matches observed trajectories. The method is based on Bayesian inference to calculate the posterior probability of an observed trajectory according to a certain model. Information theory criteria, such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and the modified AIC (AICc), are used to select the preferred model. The considered group of models includes free Brownian motion and confined motion in second- or fourth-order potentials. We determine the best information criteria for classifying trajectories, test their limits through simulations matching large sets of experimental conditions, and build a decision tree. This decision tree first uses the BIC to distinguish between free Brownian motion and confined motion. In a second step, it classifies the confining potential further using the AIC. We apply the method to experimental Clostridium perfringens ε-toxin (CPεT) receptor trajectories to show that these receptors are confined by a spring-like potential. An adaptation of this technique was applied on a sliding window in the temporal dimension along the trajectory. We applied this adaptation to experimental CPεT trajectories that lose confinement due to disaggregation of confining domains. This new technique adds another dimension to the discussion of SMT data. The mode of motion of a receptor might hold more biologically relevant information than the diffusion coefficient or domain size and may be a better tool to classify and compare different SMT experiments. PMID:24376584
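The two-stage decision tree described above might look like the following sketch, assuming the maximized log-likelihoods and parameter counts of each candidate motion model are already available for a trajectory; the numbers are invented.

```python
# Minimal two-stage classifier: BIC separates free from confined motion,
# then AIC picks the confining potential. Log-likelihoods are placeholders.
from math import log

def bic(logL, k, n):
    return -2 * logL + k * log(n)

def aic(logL, k):
    return -2 * logL + 2 * k

n = 400                                   # number of displacement steps in the trajectory
logL = {"free": -812.0, "confined_2nd": -790.5, "confined_4th": -789.8}
k = {"free": 1, "confined_2nd": 3, "confined_4th": 5}

# Stage 1: free vs. best confined model, by BIC
best_confined = min(("confined_2nd", "confined_4th"),
                    key=lambda m: bic(logL[m], k[m], n))
if bic(logL["free"], k["free"], n) <= bic(logL[best_confined], k[best_confined], n):
    print("classified as free Brownian motion")
else:
    # Stage 2: choose the confining potential by AIC
    choice = min(("confined_2nd", "confined_4th"), key=lambda m: aic(logL[m], k[m]))
    print("classified as confined motion:", choice)
```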
NASA Technical Reports Server (NTRS)
Stothers, Richard B.; Chin, Chao-wen
1999-01-01
Interior layers of stars that have been exposed by surface mass loss reveal aspects of their chemical and convective histories that are otherwise inaccessible to observation. It must be significant that the surface hydrogen abundances of luminous blue variables (LBVs) show a remarkable uniformity, specifically X_surf = 0.3-0.4, while those of hydrogen-poor Wolf-Rayet (WN) stars fall, almost without exception, below these values, ranging down to X_surf = 0. According to our stellar model calculations, most LBVs are post-red-supergiant objects in a late blue phase of dynamical instability, and most hydrogen-poor WN stars are their immediate descendants. If this is so, stellar models constructed with the Schwarzschild (temperature-gradient) criterion for convection account well for the observed hydrogen abundances, whereas models built with the Ledoux (density-gradient) criterion fail. At the brightest luminosities, the observed hydrogen abundances of LBVs are too large to be explained by any of our highly evolved stellar models, but these LBVs may occupy transient blue loops that exist during an earlier phase of dynamical instability when the star first becomes a yellow supergiant. Independent evidence concerning the criterion for convection, which is based mostly on traditional color distributions of less massive supergiants on the Hertzsprung-Russell diagram, tends to favor the Ledoux criterion. It is quite possible that the true criterion for convection changes over from something like the Ledoux criterion to something like the Schwarzschild criterion as the stellar mass increases.
Spatial effect of new municipal solid waste landfill siting using different guidelines.
Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian
2014-01-01
Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have their own guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine suitable new landfill sites under the different criterion parameters, using a constraint-mapping technique and weighted linear combination. The Macro Modeler provided in the GIS-IDRISI Andes software helped in building and executing the multi-step models. In addition, the analytic hierarchy process technique was used to determine the criterion weights reflecting the decision maker's preferences as part of the weighted linear combination procedure. The differences in the spatial results for suitable sites signify that dissimilarities in guideline specifications and requirements affect the decision-making process.
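A simple sketch of the constraint-mapping plus weighted-linear-combination step is given below; the raster layers, standardization rules, and AHP-style weights are invented for illustration and do not reproduce any specific guideline.

```python
# Weighted linear combination of standardized factor layers, masked by constraints.
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 100)
dist_river = rng.uniform(0, 5000, shape)      # distance to rivers (m), toy raster
dist_road = rng.uniform(0, 8000, shape)       # distance to roads (m), toy raster
slope = rng.uniform(0, 30, shape)             # slope (degrees), toy raster

# Standardize factors to 0-255 suitability scores (larger = more suitable)
f_river = np.clip(dist_river / 5000, 0, 1) * 255
f_road = np.clip(1 - dist_road / 8000, 0, 1) * 255
f_slope = np.clip(1 - slope / 30, 0, 1) * 255

weights = np.array([0.5, 0.3, 0.2])           # e.g. derived from AHP pairwise comparisons
wlc = weights[0]*f_river + weights[1]*f_road + weights[2]*f_slope

constraint = (dist_river > 300) & (slope < 15)   # Boolean exclusion criteria
suitability = np.where(constraint, wlc, 0)
print("suitable cells:", int((suitability > 150).sum()))
```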
1978-09-01
ARI Technical Report TR-78-A31. Criterion-Referenced Measurement in the Army: Development of a Research-Based, Practical Test Construction ... conducted to develop a Criterion-Referenced Tests (CRTs) Construction Manual. Major accomplishments were the preparation of a written review of the ... survey of the literature on Criterion-Referenced Testing conducted in order to provide an information base for development of the CRT Construction ...
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error with low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating, it reverts to a less accurate but more robust, low-cost singular-value estimate that approximates the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the nonlinear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
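The underlying idea, estimating the remaining error from successive solution changes under an assumed roughly geometric convergence, can be sketched as follows; this is a generic contraction-based estimate, not the paper's exact estimator.

```python
# If the iteration contracts with factor rho, then with d_k = ||x_{k+1} - x_k||,
# rho ~ d_k / d_{k-1} and the distance to the limit is roughly d_k * rho / (1 - rho).
import numpy as np

def solve_with_error_estimate(update, x0, tol, max_iter=1000):
    x = x0.copy()
    d_prev = None
    for k in range(max_iter):
        x_new = update(x)
        d = np.linalg.norm(x_new - x)
        if d_prev is not None and d < d_prev:
            rho = d / d_prev
            err_est = d * rho / (1.0 - rho)     # estimated distance to the limit
            if err_est < tol:
                return x_new, k + 1, err_est
        d_prev, x = d, x_new
    return x, max_iter, float("nan")

# Toy fixed-point iteration: Jacobi sweeps for a small diagonally dominant system
A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(A)

def jacobi(x):
    return (b - (A @ x - D * x)) / D

x, iters, err = solve_with_error_estimate(jacobi, np.zeros(3), tol=1e-8)
print(iters, err, np.linalg.norm(x - np.linalg.solve(A, b)))
```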
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
Power-law ansatz in complex systems: Excessive loss of information.
Tsai, Sun-Ting; Chang, Chin-De; Chang, Ching-Hao; Tsai, Meng-Xue; Hsu, Nan-Jung; Hong, Tzay-Ming
2015-12-01
The ubiquity of power-law relations in empirical data reflects physicists' love of simple laws and of uncovering common causes among seemingly unrelated phenomena. However, many reported power laws lack statistical support and mechanistic backing, and discrepancies with real data are often explained away as corrections due to finite size or other variables. We propose a simple experiment and rigorous statistical procedures to look into these issues. Making use of the fact that the occurrence rate and pulse intensity of crumpling sound obey a power law with an exponent that varies with material, we simulate a complex system with two driving mechanisms by crumpling two different sheets together. The probability function of the crumpling sound is found to transition from a sum of two power-law terms to a bona fide power law as compaction increases. In addition to showing the vicinity of these two distributions in the phase space, this observation nicely demonstrates how interactions can bring about a subtle change in macroscopic behavior, and shows that more information may be retrieved if the data are sorted. Our analyses are based on the Akaike information criterion, which is a direct measurement of information loss and emphasizes the need to strike a balance between model simplicity and goodness of fit. As a show of force, the Akaike information criterion also found the Gutenberg-Richter law for earthquakes and the scale-free model for a brain functional network, a two-dimensional sandpile, and solar flare intensity to suffer an excessive loss of information. They resemble more the crumpled-together ball at low compactions, in that there appear to be two driving mechanisms that take turns occurring.
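As a toy version of the kind of statistical check advocated above, the sketch below fits a pure power law and a competing shifted-exponential model to synthetic tail data by maximum likelihood and compares them with AIC; real analyses also require goodness-of-fit testing.

```python
# Power-law MLE (continuous case, x >= xmin) versus a shifted exponential, compared by AIC.
import numpy as np

rng = np.random.default_rng(7)
xmin = 1.0
u = rng.uniform(size=5000)
x = xmin * u ** (-1.0 / (2.5 - 1.0))          # exact power-law sample, alpha = 2.5

# Power-law MLE: alpha_hat = 1 + n / sum(ln(x/xmin))
n = len(x)
s = np.sum(np.log(x / xmin))
alpha_hat = 1.0 + n / s
logL_pl = n * np.log((alpha_hat - 1.0) / xmin) - alpha_hat * s

# Shifted-exponential MLE: lambda_hat = 1 / mean(x - xmin)
lam = 1.0 / np.mean(x - xmin)
logL_exp = n * np.log(lam) - lam * np.sum(x - xmin)

aic_pl = 2 * 1 - 2 * logL_pl                  # one free parameter each
aic_exp = 2 * 1 - 2 * logL_exp
print(f"alpha_hat = {alpha_hat:.2f}, AIC(power law) = {aic_pl:.0f}, AIC(exponential) = {aic_exp:.0f}")
```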
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated with the disease are needed to enhance the prediction accuracy of the model when it is applied to areas with similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for an outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, forecasts effectively even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist in optimizing dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors into the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.
Value and role of intensive care unit outcome prediction models in end-of-life decision making.
Barnato, Amber E; Angus, Derek C
2004-07-01
In the United States, intensive care unit (ICU) admission at the end of life is commonplace. What is the value and role of ICU mortality prediction models for informing the utility of ICU care? In this article, we review the history, statistical underpinnings, and current deployment of these models in clinical care. We conclude that the use of outcome prediction models to ration care that is unlikely to provide an expected benefit is hampered by imperfect performance, the lack of real-time availability, failure to consider functional outcomes beyond survival, and physician resistance to the use of probabilistic information when death is guaranteed by the decision it informs. Among these barriers, the most important technical deficiency is the lack of automated information systems to provide outcome predictions to decision makers, and the most important research and policy agenda is to understand and address our national ambivalence toward rationing care based on any criterion.
Bayesian Fusion of Color and Texture Segmentations
NASA Technical Reports Server (NTRS)
Manduchi, Roberto
2000-01-01
In many applications one would like to use information from both color and texture features in order to segment an image. We propose a novel technique to combine "soft" segmentations computed for two or more features independently. Our algorithm merges models according to a mean entropy criterion, and allows one to choose the appropriate number of classes for the final grouping. This technique also allows one to improve the quality of supervised classification based on one feature (e.g., color) by merging information from unsupervised segmentation based on another feature (e.g., texture).
Large Area Crop Inventory Experiment (LACIE). YES phase 1 yield feasibility report
NASA Technical Reports Server (NTRS)
1977-01-01
The author has identified the following significant results. Each state model was separately evaluated to determine whether its projected performance at the country level would satisfy a 90/90 criterion. All state models, except the North Dakota and Kansas models, satisfied that criterion both for district estimates aggregated to the state level and for state estimates directly from the models. In addition to the tests of the 90/90 criterion, the models were examined for their ability to respond adequately to fluctuations in weather. This portion of the analysis was based on a subjective interpretation of values of certain descriptive statistics. As a result, 10 of the 12 models were judged to respond inadequately to variation in weather-related variables.
NASA Astrophysics Data System (ADS)
Ushijima, T.; Yeh, W.
2013-12-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search for the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
New criteria for isotropic and textured metals
NASA Astrophysics Data System (ADS)
Cazacu, Oana
2018-05-01
In this paper an isotropic yield criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a unique parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and pure shear. If this parameter is zero, the von Mises yield criterion is recovered; if α is positive the yield surface is interior to the von Mises yield surface, whereas when α is negative the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered cubic (FCC) polycrystalline metallic materials shows that this new criterion captures the numerical yield points well. Furthermore, the criterion reproduces well yielding under combined tension-shear loadings for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. This new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for the identification of the material parameters is outlined. The predictive capabilities of the new orthotropic criterion are demonstrated through comparison between the model predictions and data on aluminum sheet samples.
Barigye, S J; Marrero-Ponce, Y; Martínez López, Y; Martínez Santiago, O; Torrens, F; García Domenech, R; Galvez, J
2013-01-01
Versatile event-based approaches for the definition of novel information theory-based indices (IFIs) are presented. An event in this context is the criterion followed in the "discovery" of molecular substructures, which in turn serve as the basis for the construction of the generalized incidence and relations frequency matrices, Q and F, respectively. From the resultant F, Shannon's, mutual, conditional and joint entropy-based IFIs are computed. In previous reports, an event named connected subgraphs was presented. The present study is an extension of this notion, in which we introduce other events, namely: terminal paths, vertex path incidence, quantum subgraphs, walks of length k, Sachs subgraphs, MACCs, E-state and substructure fingerprints and, finally, Ghose and Crippen atom-types for hydrophobicity and refractivity. Moreover, we define magnitude-based IFIs, introducing the use of the magnitude criterion in the definition of mutual, conditional and joint entropy-based IFIs. We also discuss the use of information-theoretic parameters as a measure of the dissimilarity of codified structural information of molecules. Finally, a comparison of the statistics for QSPR models obtained with the proposed IFIs and with DRAGON's molecular descriptors for two physicochemical properties, log P and log K, of 34 derivatives of 2-furylethylenes demonstrates similar or better predictive ability relative to the latter.
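The entropy quantities named above (Shannon, joint, conditional, mutual) can all be derived from one frequency matrix. The sketch below shows the standard definitions applied to a toy matrix standing in for F; the matrix values and its interpretation are assumptions for illustration only.

```python
import numpy as np

def entropies(F):
    """Shannon, joint, conditional, and mutual entropies (bits) from a
    non-negative frequency matrix F whose cells are event counts."""
    P = F / F.sum()                    # joint probabilities p(i, j)
    px = P.sum(axis=1)                 # row marginals
    py = P.sum(axis=0)                 # column marginals
    nz = P > 0
    H_joint = -(P[nz] * np.log2(P[nz])).sum()
    H_x = -(px[px > 0] * np.log2(px[px > 0])).sum()
    H_y = -(py[py > 0] * np.log2(py[py > 0])).sum()
    H_y_given_x = H_joint - H_x        # conditional entropy H(Y|X)
    I_xy = H_x + H_y - H_joint         # mutual information I(X;Y)
    return H_x, H_y, H_joint, H_y_given_x, I_xy

# Toy frequency matrix (rows: substructure events, columns: atoms) standing in for F.
F = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 2.0]])
print(entropies(F))
```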
Thermal induced flow oscillations in heat exchangers for supercritical fluids
NASA Technical Reports Server (NTRS)
Friedly, J. C.; Manganaro, J. L.; Krueger, P. G.
1972-01-01
An analytical model has been developed to predict possible unstable behavior in supercritical heat exchangers. From the complete model, a greatly simplified stability criterion is derived. As a result of this criterion, the stability of a heat exchanger system can be predicted in advance.
1980-08-01
… variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σᵢ (Yᵢ − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best"; the criteria are discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of…
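The excerpt describes scoring all 2^p candidate regressions with R². A minimal sketch of that all-subsets computation follows, using synthetic data; it also illustrates why R² alone cannot pick a "best" model, since it never decreases as predictors are added.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, p = 50, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

ssto = ((y - y.mean()) ** 2).sum()          # total sum of squares
for k in range(p + 1):
    for subset in combinations(range(p), k):
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        sse = ((y - Xs @ beta) ** 2).sum()  # error sum of squares
        print(subset, round(1.0 - sse / ssto, 3))
# R^2 is monotone in model size, which is why adjusted criteria (or AIC/BIC)
# are preferred when labeling one of the 2^p models "best".
```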
Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.
Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng
2016-10-13
Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method using large amount of chemical sensor data, which is a combination of deep learning and active learning criterion to target the difficulty of consecutive fault diagnosis. DNN with deep architectures, instead of shallow ones, could be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using stacked denoising auto-encoder (SDAE) and work through a layer-by-layer successive learning process. The features are added to the top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion for the particularity of chemical processes, which is a combination of Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of diagnosis model in an active manner rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated in two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data by further active learning compared with existing methods.
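The Best-vs-Second-Best part of the active learning criterion can be sketched as a margin ranking over the network's class probabilities. The probabilities below are invented placeholders, and the LFP component and the DNN itself are not reproduced here.

```python
import numpy as np

def bvsb_query(probs, n_query):
    """Best-vs-Second-Best selection: rank unlabeled samples by the margin
    between the top two class probabilities; small margins are most informative.
    `probs` is an (n_samples, n_classes) array of softmax outputs."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margin)[:n_query]      # indices to send for labeling

# Toy softmax outputs from a diagnosis model with 3 fault classes.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33],
                  [0.70, 0.20, 0.10]])
print(bvsb_query(probs, n_query=2))          # the two most ambiguous samples
```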
May, Michael R; Moore, Brian R
2016-11-01
Evolutionary biologists have long been fascinated by the extreme differences in species numbers across branches of the Tree of Life. This has motivated the development of statistical methods for detecting shifts in the rate of lineage diversification across the branches of phylogenetic trees. One of the most frequently used methods, MEDUSA, explores a set of diversification-rate models, where each model assigns branches of the phylogeny to a set of diversification-rate categories. Each model is first fit to the data, and the Akaike information criterion (AIC) is then used to identify the optimal diversification model. Surprisingly, the statistical behavior of this popular method is uncharacterized, which is a concern in light of: (1) the poor performance of the AIC as a means of choosing among models in other phylogenetic contexts; (2) the ad hoc algorithm used to visit diversification models; and (3) errors that we reveal in the likelihood function used to fit diversification models to the phylogenetic data. Here, we perform an extensive simulation study demonstrating that MEDUSA (1) has a high false-discovery rate (on average, spurious diversification-rate shifts are identified ≈30% of the time), and (2) provides biased estimates of diversification-rate parameters. Understanding the statistical behavior of MEDUSA is critical both to empirical researchers, in order to clarify whether these methods can make reliable inferences from empirical datasets, and to theoretical biologists, in order to clarify the specific problems that need to be solved in order to develop more reliable approaches for detecting shifts in the rate of lineage diversification. [Akaike information criterion; extinction; lineage-specific diversification rates; phylogenetic model selection; speciation.]
N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I
2016-09-13
In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Cote d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best-fitting multivariate model (ARIMAX(2,0,0)_RF) included the number of influenza cases during 1 week and 2 weeks prior, and the rainfall during the current week and 5 weeks prior. The performance of this model showed an improvement of >3 % in the Akaike information criterion (AIC) and 2.5 % in the Bayesian information criterion (BIC) compared to the reference univariate ARIMA(2,0,0). Prediction of the weekly number of influenza cases during 2011-2012 with the best-fitting multivariate model (ARIMAX(2,0,0)_RF) showed that the observed values were within the 95 % confidence interval of the predicted values during 97 of 104 weeks. Including rainfall increases the performance of the fitted and predicted models. The timing of influenza in Abidjan can be partially explained by the influence of rainfall, in a setting with little change in temperature throughout the year. These findings can help clinicians anticipate influenza cases during the rainy season by implementing preventive measures.
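A comparison of this kind, an AR(2) baseline versus the same model with rainfall regressors, can be sketched with the statsmodels SARIMAX implementation. The series, the five-week lag handling, and the coefficients below are invented assumptions, not the study's data.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
weeks = 200
rain = rng.gamma(2.0, 10.0, size=weeks)                       # weekly rainfall stand-in
rain_lag5 = np.concatenate([np.full(5, rain[:5].mean()), rain[:-5]])
flu = 20 + 0.3 * rain_lag5 + rng.normal(scale=5, size=weeks)  # synthetic case counts

exog = np.column_stack([rain, rain_lag5])
univariate = SARIMAX(flu, order=(2, 0, 0)).fit(disp=False)
with_rain = SARIMAX(flu, exog=exog, order=(2, 0, 0)).fit(disp=False)

print("ARIMA(2,0,0)          AIC/BIC:", univariate.aic, univariate.bic)
print("ARIMAX(2,0,0) + rain  AIC/BIC:", with_rain.aic, with_rain.bic)
# A drop in AIC/BIC when the exogenous rainfall terms are added mirrors the
# improvement reported for the multivariate model.
```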
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia for the years 2001 until 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
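A local-level-plus-seasonal structural time series of the kind selected above can be fitted and compared by AIC with statsmodels' UnobservedComponents class. The monthly accident series below is simulated, and the candidate specifications are illustrative rather than the paper's exact stepwise sequence.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
months = 144
level = 500 + np.cumsum(rng.normal(scale=2.0, size=months))       # slowly drifting level
seasonal = 40 * np.sin(2 * np.pi * np.arange(months) / 12)        # annual pattern
accidents = level + seasonal + rng.normal(scale=10, size=months)

candidates = {
    "local level":                    dict(level="local level"),
    "local level + seasonal":         dict(level="local level", seasonal=12),
    "local linear trend + seasonal":  dict(level="local linear trend", seasonal=12),
}
for name, spec in candidates.items():
    res = sm.tsa.UnobservedComponents(accidents, **spec).fit(disp=False)
    print(f"{name:32s} AIC = {res.aic:.1f}")
# The specification with the smallest AIC (here the local level with a seasonal
# component) plays the role of the "best-fitted model" in the abstract.
```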
Load-Based Lower Neck Injury Criteria for Females from Rear Impact from Cadaver Experiments.
Yoganandan, Narayan; Pintar, Frank A; Banerjee, Anjishnu
2017-05-01
The objectives of this study were to derive lower neck injury metrics/criteria and injury risk curves for the force, moment, and interaction criterion in rear impacts for females. Biomechanical data were obtained from previous intact and isolated post mortem human subjects and head-neck complexes subjected to posteroanterior accelerative loading. Censored data were used in the survival analysis model. The primary shear force, sagittal bending moment, and interaction (lower neck injury criterion, LNic) metrics were significant predictors of injury. The most optimal distribution was selected (Weibull, lognormal, or log-logistic) using the Akaike information criterion, according to the latest ISO recommendations for deriving risk curves. The Kolmogorov-Smirnov test was used to quantify the robustness of the assumed parametric model. The intercepts for the interaction index were extracted from the primary risk curves. Normalized confidence interval sizes (NCIS) were reported at discrete probability levels, along with the risk curves and 95% confidence intervals. A mean force of 214 N, a moment of 54 Nm, and an LNic of 0.89 were associated with a five percent probability of injury. The NCIS for these metrics were 0.90, 0.95, and 0.85. These preliminary results can be used as a first step in the definition of lower neck injury criteria for women under posteroanterior accelerative loading in crashworthiness evaluations.
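The distribution-selection step can be sketched by fitting the three candidate families and comparing AIC, then reading a 5% risk level off the winner. The force values below are fabricated for illustration, and censoring is ignored here, whereas the study used a censored survival model.

```python
import numpy as np
from scipy import stats

# Hypothetical lower-neck shear forces at injury (N); censoring ignored in this sketch.
force = np.array([180., 220., 260., 300., 150., 240., 320., 280., 200., 350.])

candidates = {
    "Weibull":      stats.weibull_min,
    "lognormal":    stats.lognorm,
    "log-logistic": stats.fisk,
}
for name, dist in candidates.items():
    params = dist.fit(force, floc=0)             # location fixed at zero
    ll = dist.logpdf(force, *params).sum()
    k = len(params) - 1                          # free parameters (location fixed)
    print(f"{name:12s} AIC = {2 * k - 2 * ll:.1f}")

# With a fitted Weibull, a 5% injury probability corresponds to its 5th percentile.
shape, loc, scale = stats.weibull_min.fit(force, floc=0)
print("force at 5% risk:", stats.weibull_min.ppf(0.05, shape, loc, scale))
```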
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
NASA Technical Reports Server (NTRS)
Wu, S. T.
1974-01-01
The responses of the solar atmosphere to an outward-propagating shock are examined by employing the Lax-Wendroff method to solve the set of nonlinear partial differential equations in the model of the solar atmosphere. It is found that this theoretical model can be used to explain the solar phenomena of surge and spray. A criterion to discriminate between surge and spray is established, and detailed information concerning the density, velocity, and temperature distributions with respect to height and time is presented. The complete computer program is also included.
NASA Astrophysics Data System (ADS)
Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.
2016-08-01
Experimental and numerical investigations on the characterisation and prediction of the cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for the plasticity characterisation and the forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model achieves a predictive capability for stress and r-value directionality similar to that of the advanced non-quadratic associated models. To accurately characterise the anisotropy evolution during hardening, the anisotropic hardening is also calibrated and implemented into the model for the prediction of the formability.
14 CFR 255.4 - Display of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and the weight given to each criterion and the specifications used by the system's programmers in constructing the algorithm. (c) Systems shall not use any factors directly or indirectly relating to carrier...” connecting flights; and (iv) The weight given to each criterion in paragraphs (c)(3)(ii) and (iii) of this...
Decision Criterion Dynamics in Animals Performing an Auditory Detection Task
Mill, Robert W.; Alves-Pinto, Ana; Sumner, Christian J.
2014-01-01
Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding “yes” during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory).
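The update policy described above, fixed outcome-dependent shifts plus decay toward a resting value, can be simulated directly. The shift sizes, decay rate, and d-prime values below are illustrative assumptions, not the fitted parameters of the study.

```python
import numpy as np

rng = np.random.default_rng(5)
c_rest, decay = 0.0, 0.8                                  # resting criterion, decay factor
shift = {"hit": +0.2, "miss": -0.4, "fa": +0.4, "cr": -0.1}  # outcome-driven shifts
d_prime = {"easy": 2.0, "hard": 0.5}                      # block-level signal strengths

c = 0.0
for block in ["easy", "hard"] * 4:
    yes_rate = 0.0
    for trial in range(100):
        signal = rng.random() < 0.5
        x = rng.normal(d_prime[block] if signal else 0.0)  # sensory evidence sample
        say_yes = x > c
        yes_rate += say_yes / 100
        outcome = ("hit" if say_yes else "miss") if signal else ("fa" if say_yes else "cr")
        c = c_rest + decay * (c - c_rest) + shift[outcome]  # decay plus fixed shift
    print(f"{block:4s} block: final criterion {c:+.2f}, yes-rate {yes_rate:.2f}")
# Hard blocks produce more misses, which pull the criterion down and bias the
# simulated observer toward "yes", reproducing the block-level pattern described.
```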
Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.
Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H
2018-01-01
Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at an objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
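The core idea, penalizing model complexity more heavily than plain BIC so that spurious feedforward terms are rejected, can be sketched with least-squares ARX fits over candidate orders. The penalty weight, the system, and the data below are placeholders; the paper's weighting was tuned through its own simulations.

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Least-squares ARX fit: y[t] = sum a_i*y[t-i] + sum b_j*u[t-j] + e[t]."""
    start = max(na, nb)
    rows = [np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]])
            for t in range(start, len(y))]
    Phi, target = np.asarray(rows), y[start:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    rss = ((target - Phi @ theta) ** 2).sum()
    return rss, len(theta), len(target)

def weighted_bic(rss, k, n, weight=1.0):
    """BIC with an extra weight on the complexity penalty (weight=1 is plain BIC)."""
    return n * np.log(rss / n) + weight * k * np.log(n)

rng = np.random.default_rng(6)
n = 500
u = rng.normal(size=n)                    # target-signal stand-in (unused by the system)
y = np.zeros(n)
for t in range(2, n):                     # data from a pure feedback (AR) process
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + rng.normal(scale=0.1)

for na, nb in [(1, 0), (2, 0), (2, 2), (4, 4)]:
    rss, k, m = fit_arx(y, u, na, nb)
    print(f"na={na} nb={nb}  BIC={weighted_bic(rss, k, m):9.1f}"
          f"  weighted BIC={weighted_bic(rss, k, m, weight=3.0):9.1f}")
# With the heavier penalty, models that include unnecessary input (feedforward)
# terms are less likely to win, reducing false-positive feedforward detection.
```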
NASA Astrophysics Data System (ADS)
Ning, Fangkun; Jia, Weitao; Hou, Jian; Chen, Xingrui; Le, Qichi
2018-05-01
Various fracture criteria, especially the Johnson-Cook (J-C) model and the (normalized) Cockcroft-Latham (C-L) criterion, were contrasted and discussed. Based on the normalized C-L criterion adopted in this paper, FE simulation was carried out, and hot rolling experiments were implemented over a temperature range of 200 °C–350 °C, rolling reduction rates of 25%–40%, and rolling speeds of 7–21 r/min. The microstructure was observed by optical microscope, and the damage values from the simulation results were compared with the crack lengths for the various parameter combinations. The results show that the plate rolled at 350 °C, 40% reduction, and 14 r/min generated fewer edge cracks, and its microstructure exhibited slight shear bands and fine dynamically recrystallized grains. An edge-crack pre-criterion model was obtained by combining the normalized C-L criterion with the Zener-Hollomon equation and the deformation activation energy.
An Improvement of the Anisotropy and Formability Predictions of Aluminum Alloy Sheets
NASA Astrophysics Data System (ADS)
Banabic, D.; Comsa, D. S.; Jurco, P.; Wagner, S.; Vos, M.
2004-06-01
The paper presents a yield criterion for orthotropic sheet metals and its implementation in a theoretical model in order to calculate the Forming Limit Curves. The proposed yield criterion has been validated for two aluminum alloys: AA3103-0 and AA5182-0, respectively. The biaxial tensile test of cross specimens has been used for the determination of the experimental yield locus. The new yield criterion has been implemented in the Marciniak-Kuczynski model for the calculation of limit strains. The calculated Forming Limit Curves have been compared with the experimental ones, determined by frictionless tests: the bulge test, the plane-strain test, and the uniaxial tensile test. The predicted Forming Limit Curves using the new yield criterion are in good agreement with the experimental ones.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
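The baseline EI criterion that the EQI/EQIE scheme is compared against has a closed form given a Gaussian process posterior. The sketch below evaluates it at a few candidate inputs; the posterior means and standard deviations are made-up numbers, and the EQI/EQIE formulas themselves are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI for minimization, given GP posterior mean `mu` and std `sigma` at
    candidate inputs and the best objective value observed so far."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Posterior summaries at three candidate inputs (illustrative numbers only).
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.05, 0.40, 0.80])
print(expected_improvement(mu, sigma, f_best=1.0))
# EQI/EQIE extend this idea to quantiles and to scoring (input, accuracy-level)
# pairs so that cheap low-accuracy runs can compete with expensive accurate ones.
```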
Aggregation models on hypergraphs
NASA Astrophysics Data System (ADS)
Alberici, Diego; Contucci, Pierluigi; Mingione, Emanuele; Molari, Marco
2017-01-01
Following a newly introduced approach by Rasetti and Merelli we investigate the possibility to extract topological information about the space where interacting systems are modelled. From the statistical datum of their observable quantities, like the correlation functions, we show how to reconstruct the activities of their constitutive parts which embed the topological information. The procedure is implemented on a class of polymer models on hypergraphs with hard-core interactions. We show that the model fulfils a set of iterative relations for the partition function that generalise those introduced by Heilmann and Lieb for the monomer-dimer case. After translating those relations into structural identities for the correlation functions we use them to test the precision and the robustness of the inverse problem. Finally the possible presence of a further interaction of peer-to-peer type is considered and a criterion to discover it is identified.
The limited use of the fluency heuristic: Converging evidence across different procedures.
Pohl, Rüdiger F; Erdfelder, Edgar; Michalkiewicz, Martha; Castela, Marta; Hilbig, Benjamin E
2016-10-01
In paired comparisons based on which of two objects has the larger criterion value, decision makers could use the subjectively experienced difference in retrieval fluency of the objects as a cue. According to the fluency heuristic (FH) theory, decision makers use fluency-as indexed by recognition speed-as the only cue for pairs of recognized objects, and infer that the object retrieved more speedily has the larger criterion value (ignoring all other cues and information). Model-based analyses, however, have previously revealed that only a small portion of such inferences are indeed based on fluency alone. In the majority of cases, other information enters the decision process. However, due to the specific experimental procedures, the estimates of FH use are potentially biased: Some procedures may have led to an overestimated and others to an underestimated, or even to actually reduced, FH use. In the present article, we discuss and test the impacts of such procedural variations by reanalyzing 21 data sets. The results show noteworthy consistency across the procedural variations revealing low FH use. We discuss potential explanations and implications of this finding.
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of p-values, which follows the χ² distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution to use for the FCT. A simple strategy for using the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
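The baseline computation is short: combine the p-values into T = -2 Σ ln p and refer T to a chi-square (independent case) or to a moment-matched gamma (dependent case). The gamma shape and scale below are placeholders for values that would be estimated from the correlation structure of the statistics.

```python
import numpy as np
from scipy import stats

pvals = np.array([0.012, 0.20, 0.047, 0.33])
T = -2 * np.log(pvals).sum()

# Independent statistics: T ~ chi-square with 2k degrees of freedom.
p_independent = stats.chi2.sf(T, df=2 * len(pvals))

# Dependent statistics: gamma approximation with matched moments (placeholder values).
shape, scale = 3.2, 2.5
p_gamma = stats.gamma.sf(T, a=shape, scale=scale)

print(f"T = {T:.2f}, p (chi-square) = {p_independent:.4f}, p (gamma) = {p_gamma:.4f}")
```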
Bayesian Evaluation of Dynamical Soil Carbon Models Using Soil Carbon Flux Data
NASA Astrophysics Data System (ADS)
Xie, H. W.; Romero-Olivares, A.; Guindani, M.; Allison, S. D.
2017-12-01
2016 was Earth's hottest year in the modern temperature record and the third consecutive record-breaking year. As the planet continues to warm, temperature-induced changes in respiration rates of soil microbes could reduce the amount of carbon sequestered in the soil organic carbon (SOC) pool, one of the largest terrestrial stores of carbon. This would accelerate temperature increases. In order to predict the future size of the SOC pool, mathematical soil carbon models (SCMs) describing interactions between the biosphere and atmosphere are needed. SCMs must be validated before they can be chosen for predictive use. In this study, we check two SCMs called CON and AWB for consistency with observed data using Bayesian goodness of fit testing that can be used in the future to compare other models. We compare the fit of the models to longitudinal soil respiration data from a meta-analysis of soil heating experiments using a family of Bayesian goodness of fit metrics called information criteria (IC), including the Widely Applicable Information Criterion (WAIC), the Leave-One-Out Information Criterion (LOOIC), and the Log Pseudo Marginal Likelihood (LPML). These ICs take the entire posterior distribution into account, rather than just one outputted model fit line. A lower WAIC and LOOIC and larger LPML indicate a better fit. We compare AWB and CON with fixed steady state model pool sizes. At equivalent SOC, dissolved organic carbon, and microbial pool sizes, CON always outperforms AWB quantitatively by all three ICs used. AWB monotonically improves in fit as we reduce the SOC steady state pool size while fixing all other pool sizes, and the same is almost true for CON. The AWB model with the lowest SOC is the best performing AWB model, while the CON model with the second-lowest SOC is the best-performing CON model. We observe that AWB displays more changes in slope sign and qualitatively displays more adaptive dynamics, which prevents AWB from being fully ruled out for predictive use, but based on ICs, CON is clearly the superior model for fitting the data. Hence, we demonstrate that Bayesian goodness of fit testing with information criteria helps us rigorously determine the consistency of models with data. Models that demonstrate their consistency to multiple data sets with our approach can then be selected for further refinement.
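Of the criteria named above, WAIC has a particularly simple estimator once a matrix of pointwise log-likelihoods over posterior draws is available. The sketch below shows that computation on fabricated log-likelihood matrices standing in for the CON and AWB fits.

```python
import numpy as np

def waic(loglik):
    """WAIC from an (n_posterior_draws, n_observations) matrix of pointwise
    log-likelihood values; lower WAIC indicates better expected predictive fit."""
    n_draws = loglik.shape[0]
    # log pointwise predictive density, computed with a stable log-sum-exp
    lppd = np.sum(np.logaddexp.reduce(loglik, axis=0) - np.log(n_draws))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))   # effective number of parameters
    return -2 * (lppd - p_waic)

# Toy posterior log-likelihoods for two competing soil-carbon models.
rng = np.random.default_rng(7)
ll_con = rng.normal(-1.0, 0.1, size=(4000, 60))
ll_awb = rng.normal(-1.2, 0.1, size=(4000, 60))
print("WAIC CON:", waic(ll_con), " WAIC AWB:", waic(ll_awb))
```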
Interactive Reliability Model for Whisker-toughened Ceramics
NASA Technical Reports Server (NTRS)
Palko, Joseph L.
1993-01-01
Wider use of ceramic matrix composites (CMC) will require the development of advanced structural analysis technologies. The use of an interactive model to predict the time-independent reliability of a component subjected to multiaxial loads is discussed. The deterministic, three-parameter Willam-Warnke failure criterion serves as the theoretical basis for the reliability model. The strength parameters defining the model are assumed to be random variables, thereby transforming the deterministic failure criterion into a probabilistic criterion. The ability of the model to account for multiaxial stress states with the same unified theory is an improvement over existing models. The new model was coupled with a public-domain finite element program through an integrated design program. This allows a design engineer to predict the probability of failure of a component. A simple structural problem is analyzed using the new model, and the results are compared to existing models.
Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy
2015-01-01
Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine if performance on the two contrasts is significantly different in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL. The second stage of the analysis was a repeated measures logistic regression where SL and contrast were used to predict the likelihood of speech discrimination criterion. Infants were able to reach criterion for the /a-i/ contrast at statistically lower SLs when compared to /ba-da/. There were six infants who never reached criterion for /ba-da/ and one never reached criterion for /a-i/. The conditional probability of not reaching criterion by 70 dB SL was 0% for /a-i/ and 21% for /ba-da/. The predictive logistic regression model showed that children were more likely to discriminate the /a-i/ even when controlling for SL. Nearly all normal-hearing infants can demonstrate discrimination criterion of a vowel contrast at 60 dB SL, while a level of ≥70 dB SL may be needed to allow all infants to demonstrate discrimination criterion of a difficult consonant contrast.
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discussed how to formulate informative priors for the independent variables and evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches of developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior with a better model fit and it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. Different types of informative priors' effects on the model estimations and goodness-of-fit have been compared and concluded. Finally, based on the results, recommendations for future research topics and study applications have been made.
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low.
Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion
NASA Astrophysics Data System (ADS)
Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel
2011-08-01
The paper deals with constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, which is a plane-stress yield criterion formulated in the form of a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of given experimental data. In order to use the model in sheet metal forming simulations we have implemented it in the general-purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Due to the scheme's effectiveness, the CPU time consumption for a simulation is comparable to that of built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used for validation of the model. For both alloys the parameters of the BBC2008 model have been identified with a purpose-developed numerical procedure based on the minimization of a cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and the accuracy of the model, together with the identification and integration procedures, guarantee the applicability of the BBC2008 yield criterion in industrial applications.
Saha, Tulshi D; Chou, S Patricia; Grant, Bridget F
2006-07-01
Item response theory (IRT) was used to determine whether the DSM-IV diagnostic criteria for alcohol abuse and dependence are arrayed along a continuum of severity. Data came from a large nationally representative sample of the US population, 18 years and older. A two-parameter logistic IRT model was used to determine the severity and discrimination of each DSM-IV criterion. Differential criterion functioning (DCF) was also assessed across subgroups of the population defined by sex, age and race-ethnicity. All DSM-IV alcohol abuse and dependence criteria, except alcohol-related legal problems, formed a continuum of alcohol use disorder severity. Abuse and dependence criteria did not consistently tap the mildest or more severe end of the continuum, respectively, and several criteria were identified as potentially redundant. The drinking in larger amounts or for longer than intended dependence criterion had greater discrimination and lower severity than any other criterion. Although several criteria were found to function differentially between subgroups defined in terms of sex and age, there was evidence that the generalizability and validity of the criteria forming the continuum remained intact at the test score level. DSM-IV diagnostic criteria for alcohol abuse and dependence form a continuum of severity, calling into question the abuse-dependence distinction in the DSM-IV and the interpretation of abuse as a milder disorder than dependence. The criteria tapped the more severe end of the alcohol use disorder continuum, highlighting the need to identify other criteria capturing the mild to intermediate range of severity. The drinking larger amounts or longer than intended dependence criterion may be a bridging criterion between drinking patterns that incur risk of alcohol use disorder at the milder end of the continuum, and tolerance, withdrawal, impaired control and serious social and occupational dysfunction at the more severe end of the alcohol use disorder continuum. Future IRT and other dimensional analyses hold great promise in informing revisions to categorical classifications and constructing new dimensional classifications of alcohol use disorders based on the DSM and the ICD.
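The two-parameter logistic model used here has a standard closed form for the endorsement probability and the information each criterion contributes. The parameter values in the sketch are hypothetical, chosen only to contrast a highly discriminating, low-severity criterion with a weaker, more severe one.

```python
import numpy as np

def p_endorse(theta, a, b):
    """Two-parameter logistic IRT: probability of endorsing a criterion,
    with discrimination `a` and severity (difficulty) `b`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by one criterion at latent severity theta."""
    p = p_endorse(theta, a, b)
    return a ** 2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)   # latent alcohol-use-disorder severity grid
# Hypothetical parameters: a discriminating, low-severity criterion versus a
# weakly discriminating, high-severity one.
print(item_information(theta, a=2.5, b=-0.5))
print(item_information(theta, a=0.8, b=1.5))
```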
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. Tuberculosis as a chronic and highly infectious disease is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim is to compare the predictive powers of the seasonal autoregressive integrated moving average (SARIMA) and the combined SARIMA-neural network auto-regression (SARIMA-NNAR) models of TB incidence and to analyse its seasonality in South Africa. TB incidence cases data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA and neural network auto-regression (SARIMA-NNAR) model were used in analysing and predicting the TB data from 2010 to 2015. Simulation performance parameters of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE) and mean absolute percentage error (MAPE) were applied to assess the better performance of prediction between the models. Though practically both models could predict TB incidence, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc) and Bayesian information criterion (BIC) are 288.56, 308.31 and 299.09 respectively, which were lower than those of the SARIMA model with corresponding values of 329.02, 327.20 and 341.99, respectively. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model. The combined model indicated a better TB incidence forecasting with a lower AICc. The model also indicates the need for resolute intervention to reduce infectious disease transmission with co-infection with HIV and other concomitant diseases, and also at festival peak periods.
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication.
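The LEUD-type models mentioned above reduce a DVH to a generalized equivalent uniform dose and pass it through a probit link. The sketch below shows that generic computation; the DVH bins, TD50, and m values are placeholders rather than the fitted parameters reported in the study.

```python
import numpy as np
from scipy.stats import norm

def eud(doses, volumes, a):
    """Generalized equivalent uniform dose from DVH dose bins (Gy) and fractional
    volumes; a = 1 reduces to the mean dose (a parallel-organ assumption)."""
    v = np.asarray(volumes) / np.sum(volumes)
    return (v * np.asarray(doses) ** a).sum() ** (1.0 / a)

def lyman_ntcp(eud_gy, td50, m):
    """Lyman (LEUD-style) complication probability via the probit link."""
    return norm.cdf((eud_gy - td50) / (m * td50))

# Toy thyroid DVH in 2-Gy-equivalent dose bins with fractional volumes.
doses = np.array([10.0, 30.0, 44.0, 60.0])
volumes = np.array([0.1, 0.3, 0.4, 0.2])
# Placeholder parameters (not the study's fitted values).
print(lyman_ntcp(eud(doses, volumes, a=1.0), td50=44.0, m=0.3))
```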
ERIC Educational Resources Information Center
Oakland, Thomas
New strategies for evaluating criterion-referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of norm-referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…
Heuristic Bayesian segmentation for discovery of coexpressed genes within genomic regions.
Pehkonen, Petri; Wong, Garry; Törönen, Petri
2010-01-01
Segmentation aims to separate homogeneous areas from the sequential data, and plays a central role in data mining. It has applications ranging from finance to molecular biology, where bioinformatics tasks such as genome data analysis are active application fields. In this paper, we present a novel application of segmentation in locating genomic regions with coexpressed genes. We aim at automated discovery of such regions without requirement for user-given parameters. In order to perform the segmentation within a reasonable time, we use heuristics. Most of the heuristic segmentation algorithms require some decision on the number of segments. This is usually accomplished by using asymptotic model selection methods like the Bayesian information criterion. Such methods are based on some simplification, which can limit their usage. In this paper, we propose a Bayesian model selection to choose the most proper result from heuristic segmentation. Our Bayesian model presents a simple prior for the segmentation solutions with various segment numbers and a modified Dirichlet prior for modeling multinomial data. We show with various artificial data sets in our benchmark system that our model selection criterion has the best overall performance. The application of our method in yeast cell-cycle gene expression data reveals potential active and passive regions of the genome.
Leão, William L; Abanto-Valle, Carlos A; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GHST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information criterion, the Bayesian predictive information criterion, and the log-predictive score are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor's 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and the GHST distribution leads to a significant improvement in the goodness of fit for the S&P 500 index returns dataset over the usual normal model.
A new criterion for acquisition of nicotine self-administration in rats.
Peartree, Natalie A; Sanabria, Federico; Thiel, Kenneth J; Weber, Suzanne M; Cheung, Timothy H C; Neisewander, Janet L
2012-07-01
Acquisition of nicotine self-administration in rodents is relatively difficult to establish and measures of acquisition rate are sometimes confounded by manipulations used to facilitate the process. This study examined acquisition of nicotine self-administration without such manipulations and used mathematical modeling to define the criterion for acquisition. Rats were given 20 daily 2-h sessions occurring 6 days/week in chambers equipped with active and inactive levers. Each active lever press resulted in nicotine reinforcement (0-0.06 mg/kg, IV) and retraction of both levers for a 20-s time out, whereas inactive lever presses had no consequences. Acquisition was defined for individual rats by the higher likelihood of reinforcers obtained across sessions fitting a logistic over a constant function according to the corrected Akaike Information Criterion (AICc). For rats that acquired self-administration, an AICc-based multi-model comparison demonstrated that the asymptote (highest number of reinforcers/session) and mid-point of the acquisition curve (h; the number of sessions necessary to reach half the asymptote) varied by nicotine dose, with both exhibiting a negative relationship (the higher the dose, the lower the number of reinforcers and the lower the value of h). The modeling approach used in this study provides a way of defining acquisition of nicotine self-administration that takes advantage of all data from individual subjects, and the procedure used is sensitive to dose differences in the absence of manipulations that influence acquisition (e.g., food restriction, prior food reinforcement, conditioned reinforcers). Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
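A minimal sketch of the model-comparison step described above: fit a constant and a logistic function to one rat's reinforcers per session and compare them by AICc. The data, starting values and parameter counts below are hypothetical assumptions, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for least-squares fits."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def logistic(session, asymptote, h, slope):
    return asymptote / (1.0 + np.exp(-(session - h) / slope))

# Hypothetical reinforcers earned per session by one rat over 20 sessions
sessions = np.arange(1, 21)
reinforcers = np.array([1, 2, 1, 3, 4, 6, 9, 12, 15, 18,
                        20, 22, 23, 24, 24, 25, 25, 26, 25, 26], dtype=float)

# Constant model: the mean number of reinforcers across sessions
rss_const = np.sum((reinforcers - reinforcers.mean()) ** 2)

# Logistic model: asymptote, mid-point h, and slope
popt, _ = curve_fit(logistic, sessions, reinforcers, p0=[25.0, 8.0, 2.0])
rss_logis = np.sum((reinforcers - logistic(sessions, *popt)) ** 2)

print("AICc constant:", aicc(rss_const, len(sessions), k=2))   # mean + error variance
print("AICc logistic:", aicc(rss_logis, len(sessions), k=4))   # 3 parameters + error variance
```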
Estimating Pressure Reactivity Using Noninvasive Doppler-Based Systolic Flow Index.
Zeiler, Frederick A; Smielewski, Peter; Donnelly, Joseph; Czosnyka, Marek; Menon, David K; Ercole, Ari
2018-04-05
The study objective was to derive models that estimate the pressure reactivity index (PRx) using the noninvasive transcranial Doppler (TCD) based systolic flow index (Sx_a) and mean flow index (Mx_a), both based on mean arterial pressure, in traumatic brain injury (TBI). Using a retrospective database of 347 patients with TBI with intracranial pressure and TCD time series recordings, we derived PRx, Sx_a, and Mx_a. We first derived the autocorrelative structure of PRx based on: (A) autoregressive integrated moving average (ARIMA) modeling in representative patients, and (B) sequential linear mixed effects (LME) models with various embedded ARIMA error structures for PRx for the entire population. Finally, we performed sequential LME models with embedded PRx ARIMA modeling to find the best model for estimating PRx using Sx_a and Mx_a. Model adequacy was assessed via normally distributed residual density. Model superiority was assessed via Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), log likelihood (LL), and analysis of variance testing between models. The most appropriate ARIMA structure for PRx in this population was (2,0,2). This was applied in sequential LME modeling. Two models were superior (employing random effects in the independent variables and intercept): (A) PRx ∼ Sx_a, and (B) PRx ∼ Sx_a + Mx_a. Correlation between observed and estimated PRx with these two models was: (A) 0.794 (p < 0.0001, 95% confidence interval (CI) = 0.788-0.799), and (B) 0.814 (p < 0.0001, 95% CI = 0.809-0.819), with acceptable agreement on Bland-Altman analysis. Through using linear mixed effects modeling and accounting for the ARIMA structure of PRx, one can estimate PRx using noninvasive TCD-based indices. We have described our first attempts at such modeling and PRx estimation, establishing the strong link between two aspects of cerebral autoregulation: measures of cerebral blood flow and those of pulsatile cerebral blood volume. Further work is required to validate these findings.
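The sketch below illustrates the two modelling steps in simplified form, using statsmodels on simulated data: an ARIMA(2,0,2) fit to characterize the autocorrelation of PRx, and a linear mixed-effects regression of PRx on Sx_a with a random intercept per patient. Unlike the study, the mixed model here does not embed an ARIMA error structure, and all data are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Hypothetical minute-by-minute index values for a few patients
frames = []
for pid in range(5):
    n = 200
    sx_a = rng.normal(0, 0.3, n).cumsum() * 0.02
    prx = 0.6 * sx_a + rng.normal(0, 0.1, n)
    frames.append(pd.DataFrame({"patient": pid, "PRx": prx, "Sx_a": sx_a}))
data = pd.concat(frames, ignore_index=True)

# Step 1: characterise the autocorrelative structure of PRx in one patient
one = data[data.patient == 0]
arima_fit = ARIMA(one["PRx"], order=(2, 0, 2)).fit()
print(arima_fit.aic, arima_fit.bic)

# Step 2: mixed-effects model PRx ~ Sx_a with a random intercept per patient
# (a simplification: no ARIMA error structure is embedded here)
mixed = sm.MixedLM.from_formula("PRx ~ Sx_a", data, groups=data["patient"]).fit()
print(mixed.summary())
```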
Evaluation of Validity and Reliability for Hierarchical Scales Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2012-01-01
A latent variable modeling method is outlined, which accomplishes estimation of criterion validity and reliability for a multicomponent measuring instrument with hierarchical structure. The approach provides point and interval estimates for the scale criterion validity and reliability coefficients, and can also be used for testing composite or…
Multi-Informant Assessment of Temperament in Children with Externalizing Behavior Problems
ERIC Educational Resources Information Center
Copeland, William; Landry, Kerry; Stanger, Catherine; Hudziak, James J.
2004-01-01
We examined the criterion validity of parent and self-report versions of the Junior Temperament and Character Inventory (JTCI) in children with high levels of externalizing problems. The sample included 412 children (206 participants and 206 siblings) participating in a family study of attention and aggressive behavior problems. Criterion validity…
ERIC Educational Resources Information Center
Messick, Samuel
Cognitive styles--defined as information processing habits--should be considered as a criterion variable in the evaluation of instruction. Research findings identify the characteristics of different cognitive styles. Used in educational practice and evaluation, cognitive styles would be new process variables extending the assessment of mental…
The Validity of the Instructional Reading Level.
ERIC Educational Resources Information Center
Powell, William R.
Presented is a critical inquiry about the product of the informal reading inventory (IRI) and about some of the elements used in the process of determining that product. Recent developments on this topic are briefly reviewed. Questions are raised concerning what is a suitable criterion level for word recognition. The original criterion of 95…
Armour, Cherie; Layne, Christopher M; Naifeh, James A; Shevlin, Mark; Duraković-Belko, Elvira; Djapo, Nermin; Pynoos, Robert S; Elhai, Jon D
2011-01-01
Posttraumatic stress disorder's (PTSD) tripartite factor structure proposed by the DSM-IV is rarely empirically supported. Other four-factor models (King et al., 1998; Simms et al., 2002) have proven to better account for PTSD's latent structure; however, results regarding model superiority are conflicting. The current study assessed whether endorsement of PTSD's Criterion A2 would impact on the factorial invariance of the King et al. (1998) model. Participants were 1572 war-exposed Bosnian secondary students who were assessed two years following the 1992-1995 Bosnian conflict. The sample was grouped by those endorsing both parts of the DSM-IV Criterion A (A2 Group) and those endorsing only A1 (Non-A2 Group). The factorial invariance of the King et al. (1998) model was not supported between the A2 vs. Non-A2 Groups; rather, the groups significantly differed on all model parameters. The impact of removing A2 on the factor structure of King et al. (1998) PTSD model is discussed in light of the proposed removal of Criterion A2 for the DSM-V. Copyright © 2010 Elsevier Ltd. All rights reserved.
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and to a multitrait model. Results from different models of analyses were compared using the REML form of the Akaike Information criterion and Schwarz' Bayesian Information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
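For readers unfamiliar with B-spline bases, the sketch below shows how quadratic B-spline regressions with different numbers of knots can be compared by AIC/BIC. It is a strong simplification: the study fitted random regression animal models by REML with genetic and permanent environmental effects, which requires dedicated software; here only a fixed-effect curve is fitted to simulated weight-age data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Hypothetical weight records from birth to maturity (age in days, weight in kg)
n = 2000
age = rng.uniform(0, 2500, n)
weight = 40 + 420 * age / (900 + age) + rng.normal(0, 25, n)
df = pd.DataFrame({"age": age, "weight": weight})

# Compare quadratic B-spline bases of increasing flexibility (more knots)
# via AIC/BIC; degree=2 gives quadratic B-splines as in the abstract.
for dof in (4, 5, 6, 7):
    fit = smf.ols(f"weight ~ bs(age, df={dof}, degree=2)", data=df).fit()
    print(f"df={dof}  AIC={fit.aic:.1f}  BIC={fit.bic:.1f}")
```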
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
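A compact sketch of the dose-volume decomposition and scoring steps, under simplifying assumptions: the outcome is treated as binary rather than ordinal, nested principal-component subsets stand in for the genetic-algorithm search, and the data are random placeholders.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical cohort: 345 patients, bladder DVH sampled at 50 dose bins,
# plus a binary cystitis outcome (the real study used ordinal grades).
n, bins = 345, 50
dvh = np.sort(rng.uniform(0, 1, (n, bins)))[:, ::-1]   # monotone cumulative DVHs
outcome = rng.binomial(1, 0.3, n)

# Decompose the DVHs into the principal components explaining >= 95% of variance
pca = PCA(n_components=0.95)
pcs = pca.fit_transform(dvh)
print("components kept:", pca.n_components_)

# Greedy stand-in for the genetic-algorithm search: score nested PC subsets by BIC
for m in range(1, pca.n_components_ + 1):
    fit = sm.Logit(outcome, sm.add_constant(pcs[:, :m])).fit(disp=0)
    print(m, round(fit.bic, 1))
```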
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.
A generic bio-economic farm model for environmental and economic assessment of agricultural systems.
Janssen, Sander; Louhichi, Kamel; Kanellopoulos, Argyris; Zander, Peter; Flichman, Guillermo; Hengsdijk, Huib; Meuter, Eelco; Andersen, Erling; Belhouchette, Hatem; Blanco, Maria; Borkowski, Nina; Heckelei, Thomas; Hecker, Martin; Li, Hongtao; Oude Lansink, Alfons; Stokstad, Grete; Thorne, Peter; van Keulen, Herman; van Ittersum, Martin K
2010-12-01
Bio-economic farm models (BEFMs) are tools to evaluate ex-post or to assess ex-ante the impact of policy and technology change on agriculture, economics and environment. Recently, various BEFMs have been developed, often for one purpose or location, but hardly any of these models are re-used later for other purposes or locations. The Farm System Simulator (FSSIM) provides a generic framework enabling the application of BEFMs under various situations and for different purposes (generating supply response functions and detailed regional or farm type assessments). FSSIM is set up as a component-based framework with components representing farmer objectives, risk, calibration, policies, current activities, alternative activities and different types of activities (e.g., annual and perennial cropping and livestock). The generic nature of FSSIM is evaluated using five criteria by examining its applications. FSSIM has been applied for different climate zones and soil types (criterion 1) and to a range of different farm types (criterion 2) with different specializations, intensities and sizes. In most applications FSSIM has been used to assess the effects of policy changes and in two applications to assess the impact of technological innovations (criterion 3). In the various applications, different data sources, level of detail (e.g., criterion 4) and model configurations have been used. FSSIM has been linked to an economic and several biophysical models (criterion 5). The model is available for applications to other conditions and research issues, and it is open to be further tested and to be extended with new components, indicators or linkages to other models.
Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.
Jiang, Yuan; He, Yunxiao; Zhang, Heping
LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, so much biological and biomedical data have been collected and they may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding in the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
Estimation of submarine mass failure probability from a sequence of deposits with age dates
Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.
2013-01-01
The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
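To illustrate the kind of model comparison involved, the sketch below computes maximum-likelihood fits and AIC values for an exponential (Poisson-process) and a lognormal inter-event-time model on hypothetical deposit intervals; it ignores the age-dating uncertainty and open intervals that the study's methods account for.

```python
import numpy as np

# Hypothetical inter-event times (kyr) between dated mass-transport deposits
intervals = np.array([12.0, 8.5, 23.1, 5.2, 17.8, 9.9, 14.3])
n = len(intervals)

# Poisson process: exponential inter-event times, MLE of the rate is 1/mean
rate = 1.0 / intervals.mean()
loglik_exp = n * np.log(rate) - rate * intervals.sum()
aic_exp = 2 * 1 - 2 * loglik_exp            # one free parameter

# Competing quasi-periodic candidate: lognormal inter-event times (2 parameters)
mu, sigma = np.log(intervals).mean(), np.log(intervals).std(ddof=0)
loglik_ln = np.sum(-np.log(intervals * sigma * np.sqrt(2 * np.pi))
                   - (np.log(intervals) - mu) ** 2 / (2 * sigma ** 2))
aic_ln = 2 * 2 - 2 * loglik_ln

print("AIC exponential:", round(aic_exp, 2), " AIC lognormal:", round(aic_ln, 2))
```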
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. A job shop scheduling problem with a makespan criterion presents a real case of customized flexible furniture production optimization, and a genetic algorithm for job shop scheduling optimization is presented. A second case, simulation-based inventory control, describes inventory optimization for products with stochastic lead time and demand; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. All cases are discussed from the optimization, modeling and learning points of view.
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. typhimurium) in a low-acid mamey pulp at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2℃ were obtained. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, as well as a Bigelow-type and an empirical model for the b'(P) and n(P) parameters, respectively, were tested as alternative secondary models. The process validation considered two- and one-step nonlinear regressions for making predictions of the survival fraction; both regression types provided an adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to Akaike's information theory, with better accuracy and more reliable predictions, was the Weibull model integrated with the exponential-logistic and exponential decay secondary models as a function of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, where the desired reductions (considering d = 5 (t_5) as the criterion of a 5 log10 reduction, 5D) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
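A minimal sketch of the primary-model step: fitting the Weibull (Mafart-type) log-survival model to one hypothetical pressure level by least squares, computing an AIC value, and deriving the time to a 5-log10 reduction from the fitted parameters. The data and starting values are invented, and the secondary models b'(P) and n(P) are not included.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    """Mafart-style Weibull model: log10(N/N0) = -b * t**n."""
    return -b * t ** n

def aic_ls(rss, n_obs, k):
    """AIC for a least-squares fit with k parameters (including the error variance)."""
    return n_obs * np.log(rss / n_obs) + 2 * k

# Hypothetical log10 survival ratios at one pressure level (time in min)
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
log_s = np.array([0.0, -1.8, -2.9, -3.6, -4.1, -4.7, -5.0])   # tailing curve

popt, _ = curve_fit(weibull_log_survival, t, log_s, p0=[1.5, 0.5], bounds=(0, np.inf))
rss = np.sum((log_s - weibull_log_survival(t, *popt)) ** 2)
print("b =", popt[0], "n =", popt[1], "AIC =", aic_ls(rss, len(t), k=3))

# Time to a 5-log10 reduction under the fitted model: t_5 = (5 / b)**(1 / n)
b_hat, n_hat = popt
print("t_5 =", (5.0 / b_hat) ** (1.0 / n_hat), "min")
```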
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods: the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averages from these four methods were superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
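The sketch below illustrates three of the weighting schemes on synthetic data: equal weights (SAM), a Granger-Ramanathan-type least-squares weighting, and AIC-based weights. The exact definitions of the GR variants A-C in the study may differ from this generic formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical calibration period: columns are member simulations, obs is observed flow
n_days, n_members = 1000, 12
truth = np.sin(np.linspace(0, 30, n_days)) * 20 + 50
sims = np.column_stack([truth + rng.normal(0, s, n_days)
                        for s in np.linspace(2, 10, n_members)])
obs = truth + rng.normal(0, 1, n_days)

# Simple arithmetic mean (SAM): equal weights
w_sam = np.full(n_members, 1.0 / n_members)

# Granger-Ramanathan-type weights: least squares of observations on member simulations
w_gr, *_ = np.linalg.lstsq(sims, obs, rcond=None)

# AIC-based weights: exp(-0.5 * delta_AIC), AIC from each member's residual variance
rss = ((sims - obs[:, None]) ** 2).sum(axis=0)
aic = n_days * np.log(rss / n_days)                 # parameter counts equal, so omitted
w_aic = np.exp(-0.5 * (aic - aic.min()))
w_aic /= w_aic.sum()

for name, w in [("SAM", w_sam), ("GR", w_gr), ("AIC", w_aic)]:
    nse = 1 - np.sum((obs - sims @ w) ** 2) / np.sum((obs - obs.mean()) ** 2)
    print(name, "NSE =", round(nse, 3))
```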
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
Territories typification technique with use of statistical models
NASA Astrophysics Data System (ADS)
Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.
2018-05-01
Typification of territories is required for the solution of many problems. The results of geological zoning obtained by various methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator P_K as a classification criterion. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so either indicator can be used in geological-engineering zoning. The suggested method has been tested and a schematic zoning map has been drawn.
Information quantity and quality affect the realistic accuracy of personality judgment.
Letzring, Tera D; Wells, Shannon M; Funder, David C
2006-07-01
Triads of unacquainted college students interacted in 1 of 5 experimental conditions that manipulated information quantity (amount of information) and information quality (relevance of information to personality), and they then made judgments of each other's personalities. To determine accuracy, the authors compared the ratings of each judge to a broad-based accuracy criterion composed of personality ratings from 3 types of knowledgeable informants (the self, real-life acquaintances, and clinician-interviewers). Results supported the hypothesis that information quantity and quality would be positively related to objective knowledge about the targets and realistic accuracy. Interjudge consensus and self-other agreement followed a similar pattern. These findings are consistent with expectations based on models of the process of accurate judgment (D. C. Funder, 1995, 1999) and consensus (D. A. Kenny, 1994). Copyright 2006 APA, all rights reserved.
The Topp-Leone generalized Rayleigh cure rate model and its application
NASA Astrophysics Data System (ADS)
Nanthaprut, Pimwarat; Bodhisuwan, Winai; Patummasut, Mena
2017-11-01
The cure rate model is a survival analysis model that treats a proportion of the censored subjects as cured. In clinical trials, data representing the time to recurrence of an event or death of patients are used to improve the efficiency of treatments. Each dataset can be separated into two groups: censored and uncensored data. In this work, a new mixture cure rate model based on the Topp-Leone generalized Rayleigh distribution is introduced. The Bayesian approach is employed to estimate its parameters. In addition, a breast cancer dataset is analyzed for model illustration purposes. According to the deviance information criterion, the Topp-Leone generalized Rayleigh cure rate model shows a better result than the Weibull and exponential cure rate models.
van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter
2015-08-07
Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of time series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.
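A minimal sketch of the core analysis AutoVAR automates, using statsmodels on simulated diary data: lag-order selection by information criteria, fitting the VAR that minimizes BIC, and a Granger causality test. AutoVAR additionally searches over many model configurations and checks model assumptions, which is not shown here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)

# Hypothetical EMA diary: 90 daily ratings of mood and physical activity
n = 90
activity = rng.normal(0, 1, n)
mood = np.zeros(n)
for t in range(1, n):
    mood[t] = 0.4 * mood[t - 1] + 0.3 * activity[t - 1] + rng.normal(0, 0.5)
data = pd.DataFrame({"mood": mood, "activity": activity})

model = VAR(data)
print(model.select_order(maxlags=7).summary())     # AIC/BIC/HQIC/FPE per lag order

fit = model.fit(maxlags=7, ic="bic")               # pick the lag length that minimises BIC
print(fit.test_causality("mood", ["activity"], kind="f").summary())   # Granger causality
```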
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on the continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
Wu, Nan; Yuan, Suomao; Liu, Jiaqi; Chen, Jun; Fei, Qi; Liu, Sen; Su, Xinlin; Wang, Shengru; Zhang, Jianguo; Li, Shugang; Wang, Yipeng; Qiu, Guixing; Wu, Zhihong
2014-10-01
A genetic association study of single nucleotide polymorphisms (SNPs) for the LMX1A gene with congenital scoliosis (CS) in the Chinese Han population. To determine whether LMX1A genetic polymorphisms are associated with susceptibility to CS. CS is a lateral curvature of the spine due to congenital vertebral defects, whose exact genetic cause has not been well established. The LMX1A gene was suggested as a potential human candidate gene for CS. However, no genetic study of LMX1A in CS has ever been reported. We genotyped 13 SNPs of the LMX1A gene in 154 patients with CS and 144 controls with matched sex and age. After conducting the Hardy-Weinberg equilibrium test, the data for the 13 SNPs were analyzed for allelic and genotypic association with logistic regression analysis. Furthermore, the genotype-phenotype association and haplotype association analysis were also performed. The 13 SNPs of the LMX1A gene met Hardy-Weinberg equilibrium in the controls, but not in the cases. None of the allelic and genotypic frequencies of these SNPs showed a significant difference between case and control groups (P > 0.05). However, the genotypic frequencies of rs1354510 and rs16841013 in the LMX1A gene were associated with CS predisposition in the unconditional logistic regression analysis (P = 0.02 and 0.018, respectively). Genotypic frequencies of 3 SNPs at rs6671290, rs1354510, and rs16841013 were found to exhibit significant differences between patients with CS with failure of formation and the healthy controls (P = 0.019, 0.007, and 0.006, respectively). In addition, in the model analysis using unconditional logistic regression, the optimized models for the 3 genotype-positive SNPs with failure of formation were rs6671290 (codominant; P = 0.025, Akaike information criterion = 316.6, Bayesian information criterion = 333.9), rs1354510 (overdominant; P = 0.0017, Akaike information criterion = 312.1, Bayesian information criterion = 325.9), and rs16841013 (overdominant; P = 0.0016, Akaike information criterion = 311.1, Bayesian information criterion = 325), respectively. However, the haplotype distributions in the case group were not significantly different from those of the control group in the 3 haplotype blocks. To our knowledge, this is the first study to identify that the SNPs of the LMX1A gene might be associated with the susceptibility to CS and different clinical phenotypes of CS in the Chinese Han population. Level of Evidence: 4.
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in structure, is used instead of the displacement mode for enhancing model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate over the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide the global information of the structure, are used to guarantee the accuracy of modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness. The updated dynamic FE model of the beam structure, which can correctly predict both the natural frequencies and the local dynamic strains, is reliable for the subsequent dynamic analysis and dynamic strength evaluation.
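As an illustration of the objective function described above, the sketch below combines relative frequency residuals with modal-assurance-criterion residuals over strain mode shapes; a plain MAC is used as a stand-in for the coordinate strain modal assurance criterion, and all mode shapes, frequencies and weights are invented.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between an analytical and an experimental mode shape."""
    num = np.abs(phi_a @ phi_e) ** 2
    return num / ((phi_a @ phi_a) * (phi_e @ phi_e))

def updating_objective(freqs_fe, freqs_test, strain_modes_fe, strain_modes_test,
                       w_freq=1.0, w_mode=1.0):
    """Weighted sum of frequency residuals and strain-mode correlation residuals."""
    freq_res = np.sum(((freqs_fe - freqs_test) / freqs_test) ** 2)
    mode_res = np.sum([1.0 - mac(a, e)
                       for a, e in zip(strain_modes_fe.T, strain_modes_test.T)])
    return w_freq * freq_res + w_mode * mode_res

# Hypothetical data: 3 modes measured at 10 strain gauges on a clamped-clamped beam
rng = np.random.default_rng(5)
phi_test = rng.normal(size=(10, 3))
phi_fe = phi_test + rng.normal(scale=0.05, size=(10, 3))
print(updating_objective(np.array([50.2, 138.9, 271.4]),
                         np.array([49.8, 140.1, 270.0]), phi_fe, phi_test))
```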
Combustion properties of Kraft Black Liquors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, W.J. Jr.; Hupa, M.
1993-04-01
In a previous study of the phenomena involved in the combustion of black liquor droplets, a numerical model was developed. The model required certain black-liquor-specific combustion information which was not then available, and additional data were needed for evaluating the model. The overall objective of the project reported here was to provide experimental data on key aspects of black liquor combustion, to interpret the data, and to put it into a form which would be useful for computational models for recovery boilers. The specific topics to be investigated were the volatiles and char carbon yields from pyrolysis of single black liquor droplets; a criterion for the onset of devolatilization and the accompanying rapid swelling; and the surface temperature of black liquor droplets during pyrolysis, combustion, and gasification. Additional information on the swelling characteristics of black liquor droplets was also obtained as part of the experiments conducted.
Aggregation models on hypergraphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alberici, Diego, E-mail: diego.alberici2@unibo.it; Contucci, Pierluigi, E-mail: pierluigi.contucci@unibo.it; Mingione, Emanuele, E-mail: emanuele.mingione2@unibo.it
2017-01-15
Following a newly introduced approach by Rasetti and Merelli we investigate the possibility to extract topological information about the space where interacting systems are modelled. From the statistical datum of their observable quantities, like the correlation functions, we show how to reconstruct the activities of their constitutive parts which embed the topological information. The procedure is implemented on a class of polymer models on hypergraphs with hard-core interactions. We show that the model fulfils a set of iterative relations for the partition function that generalise those introduced by Heilmann and Lieb for the monomer–dimer case. After translating those relations into structural identities for the correlation functions we use them to test the precision and the robustness of the inverse problem. Finally the possible presence of a further interaction of peer-to-peer type is considered and a criterion to discover it is identified.
Physical mechanism and numerical simulation of the inception of the lightning upward leader
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qingmin; Lu Xinchang; Shi Wei
2012-12-15
The upward leader is a key physical process of the leader progression model of lightning shielding. The inception mechanism and criterion of the upward leader need further understanding and clarification. Based on leader discharge theory, this paper proposes the critical electric field intensity of the stable upward leader (CEFISUL) and characterizes it by the valve electric field intensity on the conductor surface, E_L, which is the basis of a new inception criterion for the upward leader. Through numerical simulation under various physical conditions, we verified that E_L is mainly related to the conductor radius, and data fitting yields the mathematical expression of E_L. We further establish a computational model for lightning shielding performance of the transmission lines based on the proposed CEFISUL criterion, which reproduces the shielding failure rate of typical UHV transmission lines. The model-based calculation results agree well with the statistical data from on-site operations, which show the effectiveness and validity of the CEFISUL criterion.
ERIC Educational Resources Information Center
Meredith, Keith E.; Sabers, Darrell L.
Data required for evaluating a Criterion Referenced Measurement (CRM) is described with a matrix. The information within the matrix consists of the "pass-fail" decisions of two CRMs. By differentially defining these two CRMs, different concepts of reliability and validity can be examined. Indices suggested for analyzing the matrix are listed with…
Water-Sediment Controversy in Setting Environmental Standards for Selenium
Steven J. Hamilton; A. Dennis Lemly
1999-01-01
A substantial amount of laboratory and field research on selenium effects to biota has been accomplished since the national water quality criterion was published for selenium in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 µg/L. This commentary will present information to support a national...
The Development of a Criterion Instrument for Counselor Selection.
ERIC Educational Resources Information Center
Remer, Rory; Sease, William
A measure of potential performance as a counselor is needed as an adjunct to the information presently employed in selection decisions. This article deals with one possible method of development of such a potential performance criterion and the steps taken, to date, in the attempt to validate it. It includes: the overall effectiveness of the…
ERIC Educational Resources Information Center
Tibbetts, Katherine A.; And Others
This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…
ERIC Educational Resources Information Center
Hirschi, Andreas
2009-01-01
Interest differentiation and elevation are supposed to provide important information about a person's state of interest development, yet little is known about their development and criterion validity. The present study explored these constructs among a group of Swiss adolescents. Study 1 applied a cross-sectional design with 210 students in 11th…
A Humanistic Approach to Criterion Referenced Testing.
ERIC Educational Resources Information Center
Wilson, H. A.
Test construction is not the strictly logical process that we might wish it to be. This is particularly true in a large on-going project such as the National Assessment of Educational Progress (NAEP). Most of the really deep questions can only be answered by the exercise of well-informed human judgment. Criterion-referenced testing is still a term…
Changing the criterion for memory conformity in free recall and recognition.
Wright, Daniel B; Gabbert, Fiona; Memon, Amina; London, Kamala
2008-02-01
People's responses during memory studies are affected by what other people say. This memory conformity effect has been shown in both free recall and recognition. Here we examine whether accurate, inaccurate, and suggested answers are affected similarly when the response criterion is varied. In the first study, participants saw four pictures of detailed scenes and then discussed the content of these scenes with another participant who saw the same scenes, but with a couple of details changed. Participants were either told to recall everything they could and not to worry about making mistakes (lenient), or only to recall items if they were sure that they were accurate (strict). The strict instructions reduced the amount of inaccurate information reported that the other person suggested, but also reduced the number of accurate details recalled. In the second study, participants were shown a large set of faces and then their memory recognition was tested with a confederate on these and fillers. Here also, the criterion manipulation shifted both accurate and inaccurate responses, and those suggested by the confederate. The results are largely consistent with a shift in response criterion affecting accurate, inaccurate, and suggested information. In addition we varied the level of secrecy in the participants' responses. The effects of secrecy were complex and depended on the level of response criterion. Implications for interviewing eyewitnesses and line-ups are discussed.
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions about treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and level evolution are the considered endpoints. Specific validation criteria based on a plus or minus 10% standardized distance in means and variances were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
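One plausible reading of the validity criterion is sketched below: compute standardized distances between the real and simulated means and variances and require both to lie within plus or minus 10%. The exact standardization used in the study may differ, and the data here are synthetic.

```python
import numpy as np

def standardized_distances(real, simulated):
    """Standardized distances between real and simulated means and variances."""
    d_mean = (simulated.mean() - real.mean()) / real.std(ddof=1)
    d_var = (simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
    return d_mean, d_var

def is_valid(real, simulated, tol=0.10):
    """Validity criterion: both distances within plus or minus `tol` (10% by default)."""
    d_mean, d_var = standardized_distances(real, simulated)
    return abs(d_mean) <= tol and abs(d_var) <= tol

rng = np.random.default_rng(6)
real_chol = rng.normal(5.8, 1.0, 200)          # hypothetical observed cholesterol (mmol/L)
sim_chol = rng.normal(5.85, 1.02, 200)         # one simulated trial arm
print(standardized_distances(real_chol, sim_chol), is_valid(real_chol, sim_chol))
```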
Oprisan, Sorinel A.; Buhusi, Catalin V.
2011-01-01
In most species, the capability of perceiving and using the passage of time in the seconds-to-minutes range (interval timing) is not only accurate but also scalar: errors in time estimation are linearly related to the estimated duration. The ubiquity of scalar timing extends over behavioral, lesion, and pharmacological manipulations. For example, in mammals, dopaminergic drugs induce an immediate, scalar change in the perceived time (clock pattern), whereas cholinergic drugs induce a gradual, scalar change in perceived time (memory pattern). How do these properties emerge from unreliable, noisy neurons firing in the milliseconds range? Neurobiological information relative to the brain circuits involved in interval timing provides support for a striatal beat frequency (SBF) model, in which time is coded by the coincidental activation of striatal spiny neurons by cortical neural oscillators. While biologically plausible, the impracticality of perfect oscillators, or their lack thereof, questions this mechanism in a brain with noisy neurons. We explored the computational mechanisms required for the clock and memory patterns in an SBF model with biophysically realistic and noisy Morris–Lecar neurons (SBF–ML). Under the assumption that dopaminergic drugs modulate the firing frequency of cortical oscillators, and that cholinergic drugs modulate the memory representation of the criterion time, we show that our SBF–ML model can reproduce the pharmacological clock and memory patterns observed in the literature. Numerical results also indicate that parameter variability (noise) – which is ubiquitous in the form of small fluctuations in the intrinsic frequencies of neural oscillators within and between trials, and in the errors in recording/retrieving stored information related to criterion time – seems to be critical for the time-scale invariance of the clock and memory patterns. PMID:21977014
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way so as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which not necessarily need be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
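For concreteness, the sketch below scores two hypothetical candidate models (for example, two Bayesian network structures) by AIC, BIC and a two-part crude MDL code length. The log-likelihoods and parameter counts are invented, and the MDL formulation is one common approximation, not necessarily the one used in the paper.

```python
import numpy as np

def aic(loglik, k):
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    return -2.0 * loglik + k * np.log(n)

def crude_mdl(loglik, k, n):
    """Two-part crude MDL code length in bits: data given model + model parameters."""
    return -loglik / np.log(2) + 0.5 * k * np.log2(n)

# Hypothetical scores for two candidate Bayesian networks fitted to n = 500 cases:
# a sparse network (12 free parameters) and a denser one (40 free parameters).
n = 500
for name, loglik, k in [("sparse", -1520.0, 12), ("dense", -1498.0, 40)]:
    print(name, round(aic(loglik, k), 1), round(bic(loglik, k, n), 1),
          round(crude_mdl(loglik, k, n), 1))
```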
NASA Astrophysics Data System (ADS)
Kukunda, Collins B.; Duque-Lazo, Joaquín; González-Ferreiro, Eduardo; Thaden, Hauke; Kleinn, Christoph
2018-03-01
Distinguishing tree species is relevant in many contexts of remote sensing assisted forest inventory. Accurate tree species maps support management and conservation planning, pest and disease control and biomass estimation. This study evaluated the performance of applying ensemble techniques with the goal of automatically distinguishing Pinus sylvestris L. and Pinus uncinata Mill. ex Mirb. within a 1.3 km² mountainous area in Barcelonnette (France). Three modelling schemes were examined, based on: (1) high-density LiDAR data (160 returns m⁻²), (2) Worldview-2 multispectral imagery, and (3) Worldview-2 and LiDAR in combination. Variables related to the crown structure and height of individual trees were extracted from the normalized LiDAR point cloud at individual-tree level, after performing individual tree crown (ITC) delineation. Vegetation indices and the Haralick texture indices were derived from Worldview-2 images and served as independent spectral variables. Selection of the best predictor subset was done after a comparison of three variable selection procedures: (1) Random Forests with cross validation (AUCRFcv), (2) Akaike Information Criterion (AIC) and (3) Bayesian Information Criterion (BIC). To classify the species, 9 regression techniques were combined using ensemble models. Predictions were evaluated using cross validation and an independent dataset. Integration of datasets and models improved individual tree species classification (True Skill Statistic, TSS; from 0.67 to 0.81) over individual techniques and maintained strong predictive power (Relative Operating Characteristic, ROC = 0.91). Assemblage of regression models and integration of the datasets provided more reliable species distribution maps and associated tree-scale mapping uncertainties. Our study highlights the potential of model and data assemblage at improving species classifications needed in present-day forest planning and management.
Villadiego, Faider Alberto Castaño; Camilo, Breno Soares; León, Victor Gomez; Peixoto, Thiago; Díaz, Edgar; Okano, Denise; Maitan, Paula; Lima, Daniel; Guimarães, Simone Facioni; Siqueira, Jeanne Broch; Pinho, Rogério
2018-01-01
Nonlinear mixed models were used to describe longitudinal scrotal circumference (SC) measurements of Nellore bulls. Model comparisons were based on Akaike's information criterion, Bayesian information criterion, error sum of squares, adjusted R2 and percentage of convergence. Sequentially, the best model was used to compare the SC growth curve in bulls divergently classified according to SC at 18–21 months of age. For this, bulls were classified into five groups: SC < 28cm; 28cm ≤ SC < 30cm, 30cm ≤ SC < 32cm, 32cm ≤ SC < 34cm and SC ≥ 34cm. The Michaelis-Menten model showed the best fit according to the mentioned criteria. In this model, β1 is the asymptotic SC value and β2 represents the time to half-final growth and may be related to sexual precocity. Parameters of the individual estimated growth curves were used to create a new dataset to evaluate the effect of the classification, farms, and year of birth on the β1 and β2 parameters. Bulls of the largest SC group presented a larger predicted SC along all analyzed periods; nevertheless, the smallest SC group showed predicted SC similar to the intermediate SC groups (28cm ≤ SC < 32cm) around 1200 days of age. In this context, bulls classified as unsuitable for reproduction at 18–21 months of age can reach a condition similar to that of bulls considered to be in good condition. In terms of classification at 18–21 months, asymptotic SC was similar among groups, farms and years; however, β2 differed among groups, indicating that differences in growth curves are related to sexual precocity. In summary, it seems that selection based on SC at too early an age may lead to discarding bulls with suitable reproductive potential. PMID:29494597
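To make the model-comparison step concrete, here is a minimal sketch of fitting a Michaelis-Menten growth curve and computing Gaussian-error AIC and BIC from the residual sum of squares; the ages and circumference values are hypothetical and merely illustrate the mechanics.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(t, b1, b2):
    """b1: asymptotic scrotal circumference, b2: age at half of final growth."""
    return b1 * t / (b2 + t)

def ic_from_rss(rss, n, k):
    """Gaussian-error AIC and BIC (up to an additive constant)."""
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

# Hypothetical ages (days) and scrotal circumferences (cm) for one bull.
age = np.array([300, 400, 550, 700, 900, 1100, 1300], dtype=float)
sc  = np.array([18.0, 22.5, 26.0, 28.5, 30.5, 31.8, 32.5])

params, _ = curve_fit(michaelis_menten, age, sc, p0=[35.0, 300.0])
rss = np.sum((sc - michaelis_menten(age, *params)) ** 2)
print(params, ic_from_rss(rss, n=len(sc), k=2))
```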
Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.
NASA Astrophysics Data System (ADS)
Velichkin, Vladimir A.; Zavyalov, Vladimir A.
2018-03-01
This article presents the results of thermal object functioning control analysis (heat exchanger, dryer, heat treatment chamber, etc.). The results were used to determine a mathematical model of the generalized thermal control object. The appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, control object mathematical model and technological constraints. The “maximum energy efficiency” criterion helped avoid solving a system of nonlinear differential equations and solve the formulated problem of mathematical programming in an analytical way. It should be noted that in the case under review the search for optimal control and optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.
Boosting for detection of gene-environment interactions.
Pashova, H; LeBlanc, M; Kooperberg, C
2013-01-30
In genetic association studies, it is typically thought that genetic variants and environmental variables jointly will explain more of the inheritance of a phenotype than either of these two components separately. Traditional methods to identify gene-environment interactions typically consider only one measured environmental variable at a time. However, in practice, multiple environmental factors may each be imprecise surrogates for the underlying physiological process that actually interacts with the genetic factors. In this paper, we develop a variant of L2 boosting that is specifically designed to identify combinations of environmental variables that jointly modify the effect of a gene on a phenotype. Because the effect modifiers might have a small signal compared with the main effects, working in a space that is orthogonal to the main predictors allows us to focus on the interaction space. In a simulation study that investigates some plausible underlying model assumptions, our method outperforms the least absolute shrinkage and selection operator (LASSO) and the Akaike Information Criterion and Bayesian Information Criterion model selection procedures, achieving the lowest test error. In an example for the Women's Health Initiative-Population Architecture using Genomics and Epidemiology study, the dedicated boosting method was able to pick out two single-nucleotide polymorphisms for which effect modification appears present. The performance was evaluated on an independent test set, and the results are promising. Copyright © 2012 John Wiley & Sons, Ltd.
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
ERIC Educational Resources Information Center
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence (EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate into the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Investigation of limit state criteria for amorphous metals
NASA Astrophysics Data System (ADS)
Comanici, A. M.; Sandovici, A.; Barsanescu, P. D.
2016-08-01
Amorphous metals are metals with a non-crystalline structure whose properties closely resemble those of glass. A distinguishing feature of amorphous metals, also known as metallic glasses, is their good electrical conductivity. Extending limit state criteria to different materials makes this type of alloy a natural choice for validating new criteria. Using a new criterion developed for biaxial and triaxial states of stress, the results are examined to determine the applicability of the mathematical model to these amorphous metals. For brittle materials in particular, finding a suitable fracture criterion is extremely important. The Mohr-Coulomb criterion, which assumes a linear failure envelope, is often used for very brittle materials, but for metallic glasses this criterion is not consistent with experimental observations. For metallic glasses and other high-strength materials, Rui Tao Qu and Zhe Feng Zhang proposed modelling the failure envelope with an ellipse in σ-τ coordinates. In this paper, that model is developed for principal stress space. A method for transforming σ-τ coordinates into principal stress coordinates is also proposed, and the theoretical results are consistent with the experimental ones.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; this is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real value data, hierarchical classes, independent classifications and deciding for each class which attributes are relevant.
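The trade-off described above — adding a class only when it shortens the overall description — can be mimicked with an off-the-shelf penalized-likelihood score. The sketch below is not the minimum message length criterion itself; it uses BIC on a Gaussian mixture as a rough stand-in, with synthetic two-cluster data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data with two underlying classes.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (120, 2))])

# Try an increasing number of classes; the penalty plays the role of the extra
# description length of each added class, so the score stops improving once
# new classes no longer "pay for themselves".
scores = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
          for k in range(1, 7)}
best_k = min(scores, key=scores.get)
print(best_k, scores)
```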
Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.
Deco, G; Schürmann, B
1999-05-15
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that demonstrates the experimentally observed fact that cortical neurons display synchronous firing for some stimuli and not for others. The principle can be regarded as postulating a nonparametric reconstruction method as the optimization criterion for learning the required functional connectivity, which justifies and explains synchronous firing for the binding of features as a mechanism for spatiotemporal coding. This can be expressed in an information-theoretic way by maximizing the discrimination ability between different sensory inputs in minimal time.
McBride, Orla; Adamson, Gary; Bunting, Brendan P; McCann, Siobhan
2009-01-01
Research has demonstrated that diagnostic orphans (i.e. individuals who experience only one to two criteria of DSM-IV alcohol dependence) can encounter significant health problems. Using the SF-12v2, this study examined the general health functioning of alcohol users, and in particular, diagnostic orphans. Current drinkers (n = 26,913) in the National Epidemiologic Survey on Alcohol and Related Conditions were categorized into five diagnosis groups: no alcohol use disorder (no-AUD), one-criterion orphans, two-criterion orphans, alcohol abuse and alcohol dependence. Latent variable modelling was used to assess the associations between the physical and mental health factors of the SF-12v2 and the diagnosis groups and a variety of background variables. In terms of mental health, one-criterion orphans had significantly better health than two-criterion orphans and the dependence group, but poorer health than the no-AUD group. No significant differences were evident between the one-criterion orphan group and the alcohol abuse group. One-criterion orphans had significantly poorer physical health when compared to the no-AUD group. One- and two-criterion orphans did not differ in relation to physical health. Consistent with previous research, diagnostic orphans in the current study appear to have experienced clinically relevant symptoms of alcohol dependence. The current findings suggest that diagnostic orphans may form part of an alcohol use disorders spectrum severity.
Varying the valuating function and the presentable bank in computerized adaptive testing.
Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio
2011-05-01
In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
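As a small illustration of the two item-valuating rules contrasted above, the sketch below scores a hypothetical 2PL item bank both by Fisher information at the current trait estimate and by the matching criterion (distance between the estimate and the point where the item's information peaks, which for the 2PL is the difficulty parameter); all item parameters are made up.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Probability of a correct response under the two-parameter logistic model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item Fisher information for the 2PL: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank (discrimination a, difficulty b) and a trait estimate.
a = np.array([1.2, 0.8, 1.5, 1.0])
b = np.array([-1.0, 0.2, 0.4, 1.3])
theta_hat = 0.3

info_pick  = np.argmax(fisher_info(theta_hat, a, b))  # maximum-information rule
match_pick = np.argmin(np.abs(theta_hat - b))         # matching criterion
print(info_pick, match_pick)
```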
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
The minor component (MC) plays an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. By using a gradient ascent method and a recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighted matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Tseng, Yi-Ju; Wu, Jung-Hsuan; Ping, Xiao-Ou; Lin, Hui-Chi; Chen, Ying-Yu; Shang, Rung-Ji; Chen, Ming-Yuan; Lai, Feipei
2012-01-01
Background The emergence and spread of multidrug-resistant organisms (MDROs) are causing a global crisis. Combating antimicrobial resistance requires prevention of transmission of resistant organisms and improved use of antimicrobials. Objectives To develop a Web-based information system for automatic integration, analysis, and interpretation of the antimicrobial susceptibility of all clinical isolates that incorporates rule-based classification and cluster analysis of MDROs and implements control chart analysis to facilitate outbreak detection. Methods Electronic microbiological data from a 2200-bed teaching hospital in Taiwan were classified according to predefined criteria of MDROs. The numbers of organisms, patients, and incident patients in each MDRO pattern were presented graphically to describe spatial and time information in a Web-based user interface. Hierarchical clustering with 7 upper control limits (UCL) was used to detect suspicious outbreaks. The system’s performance in outbreak detection was evaluated based on vancomycin-resistant enterococcal outbreaks determined by a hospital-wide prospective active surveillance database compiled by infection control personnel. Results The optimal UCL for MDRO outbreak detection was the upper 90% confidence interval (CI) using germ criterion with clustering (area under ROC curve (AUC) 0.93, 95% CI 0.91 to 0.95), upper 85% CI using patient criterion (AUC 0.87, 95% CI 0.80 to 0.93), and one standard deviation using incident patient criterion (AUC 0.84, 95% CI 0.75 to 0.92). The performance indicators of each UCL were statistically significantly higher with clustering than those without clustering in germ criterion (P < .001), patient criterion (P = .04), and incident patient criterion (P < .001). Conclusion This system automatically identifies MDROs and accurately detects suspicious outbreaks of MDROs based on the antimicrobial susceptibility of all clinical isolates. PMID:23195868
Modelling the graphite fracture mechanisms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacquemoud, C.; Marie, S.; Nedelec, M.
2012-07-01
In order to define a design criterion for graphite components, it is important to identify the physical phenomena responsible for graphite fracture, to include them in a more effective modelling. In a first step, a large panel of experiments was realised in order to build up an important database; results of tensile tests and 3- and 4-point bending tests on smooth and notched specimens have been analysed and have demonstrated important geometry-related effects on the behaviour up to fracture. Then, first simulations with an elastic or an elastoplastic bilinear constitutive law did not make it possible to simulate the experimental fracture stress variations with the specimen geometry, the fracture mechanisms of the graphite being at the microstructural scale. That is the reason why a specific F.E. model of the graphite structure has been developed in which every graphite grain has been meshed independently; the crack initiation along the basal plane of the particles as well as the crack propagation and coalescence have been modelled too. This specific model has been used to test two different approaches for fracture initiation: a critical stress criterion and two criteria of fracture mechanics type. They are all based on crystallographic considerations, as a global critical stress criterion gave unsatisfactory results. The criteria of fracture mechanics type being extremely unstable and unable to represent the graphite global behaviour up to the final collapse, the critical stress criterion has been preferred to predict the results of the large range of available experiments, on both smooth and notched specimens. In so doing, the experimental observations have been correctly simulated: the geometry-related effects on the experimental fracture stress dispersion, the specimen volume effects on the macroscopic fracture stress and the crack propagation at a constant stress intensity factor. In addition, the parameters of the criterion have been related to experimental observations: the local crack initiation stress of 8 MPa corresponds to the appearance of non-linearity in the global behaviour observed experimentally, and the maximal critical stress of 30 MPa defined for the particles is equivalent to the fracture stress of notched specimens. This innovative combination of crack modelling and a local crystallographic critical stress criterion made it possible to understand that cleavage initiation and propagation in the graphite microstructure is driven by a mean critical stress criterion. (authors)
NASA Technical Reports Server (NTRS)
Hale, C.; Valentino, G. J.
1982-01-01
Supervisory decision making and control behavior within a C(3)-oriented, ground-based weapon system are being studied. The program involves empirical investigation of the sequence of control strategies used during engagement of aircraft targets. An engagement is conceptually divided into several stages, which include initial information processing activity, tracking, and ongoing adaptive control decisions. Following a brief description of model parameters, two experiments that served as an initial investigation into the accuracy of assumptions regarding the importance of situation assessment in procedure selection are outlined. Preliminary analysis of the results upheld the validity of the assumptions regarding strategic information processing and cue-criterion relationship learning. These results indicate that this model structure should be useful in studies of supervisory decision behavior.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to deal with automatic segmentation of weak-edge medical images. After analyzing the snake and geometric active contour models, a minimum variation snake model was proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from the foreground and background regions. It drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments have proved that the proposed model is robust to initial contour placements and can segment weak-edge medical images automatically. In addition, segmentation tests on noisy medical images filtered by a curvature flow filter, which preserves edge features, show a significant effect.
Wan, Wai-Yin; Chan, Jennifer S K
2009-08-01
For time series of count data, correlated measurements, clustering as well as excessive zeros occur simultaneously in biomedical applications. Ignoring such effects might contribute to misleading treatment outcomes. A generalized mixture Poisson geometric process (GMPGP) model and a zero-altered mixture Poisson geometric process (ZMPGP) model are developed from the geometric process model, which was originally developed for modelling positive continuous data and was extended to handle count data. These models are motivated by evaluating the trend development of new tumour counts for bladder cancer patients as well as by identifying useful covariates which affect the count level. The models are implemented using Bayesian method with Markov chain Monte Carlo (MCMC) algorithms and are assessed using deviance information criterion (DIC).
Pasekov, V P
2013-03-01
The paper considers problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of an age-structured population. The possibility of predicting adaptation results as the values of an organism's traits (properties) that maximize a certain function of traits (an optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of the success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested to overcome these objections in terms of adaptive dynamics theory using the concept of invasion fitness. Reasons are given that favor using the average number of offspring of an individual, R(L), as the optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum of R(L) corresponds to evolutionary stability and, in certain cases, convergent stability of the trait values. The search for evolutionarily stable values against a background of limited resources for reproduction is a problem of linear programming.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumgartner, S.; Bieli, R.; Bergmann, U. C.
2012-07-01
An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set-up for the analyses of Anticipated Operation Occurrences (AOOs) and accidents. In the Monte Carlo approach a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from the introduction at KKL. (authors)
One-dimensional barcode reading: an information theoretic approach
NASA Astrophysics Data System (ADS)
Houni, Karim; Sawaya, Wadih; Delignon, Yves
2008-03-01
In the convergence context of identification technology and information-data transmission, the barcode found its place as the simplest and the most pervasive solution for new uses, especially within mobile commerce, bringing youth to this long-lived technology. From a communication theory point of view, a barcode is a singular coding based on a graphical representation of the information to be transmitted. We present an information theoretic approach for 1D image-based barcode reading analysis. With a barcode facing the camera, distortions and acquisition are modeled as a communication channel. The performance of the system is evaluated by means of the average mutual information quantity. On the basis of this theoretical criterion for a reliable transmission, we introduce two new measures: the theoretical depth of field and the theoretical resolution. Simulations illustrate the gain of this approach.
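The evaluation quantity named above — the average mutual information between the transmitted barcode symbols and the decoded output of the acquisition channel — can be computed directly from a joint distribution. The sketch below does this for a hypothetical, slightly noisy two-symbol channel; the numbers are illustrative only.

```python
import numpy as np

def mutual_information(joint):
    """Average mutual information (bits) of a discrete channel, given the
    joint distribution p(x, y) as a 2-D array."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical joint distribution of transmitted bar/space symbols (rows)
# and decoded symbols (columns) for a slightly noisy acquisition channel.
joint = np.array([[0.45, 0.05],
                  [0.04, 0.46]])
print(mutual_information(joint))
```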
Bauser, G; Hendricks Franssen, Harrie-Jan; Stauffer, Fritz; Kaiser, Hans-Peter; Kuhlmann, U; Kinzelbach, W
2012-08-30
We present the comparison of two control criteria for the real-time management of a water well field. The criteria were used to simulate the operation of the Hardhof well field in the city of Zurich, Switzerland. This well field is threatened by diffuse pollution in the subsurface of the surrounding city area. The risk of attracting pollutants is higher if the pumping rates in four horizontal wells are increased, and can be reduced by increasing artificial recharge in several recharge basins and infiltration wells or by modifying the artificial recharge distribution. A three-dimensional finite element flow model was built for the Hardhof site. The first control criterion used hydraulic head differences (Δh-criterion) to control the management of the well field and the second criterion used a path line method (%s-criterion) to control the percentage of inflowing water from the city area. Both control methods adapt the allocation of artificial recharge (AR) for given pumping rates in time. The simulation results show that (1) historical management decisions were less effective compared to the optimal control according to the two different criteria and (2) the distributions of artificial recharge calculated with the two control criteria also differ from each other, with the %s-criterion giving better results than the Δh-criterion. The recharge management with the %s-criterion requires a smaller amount of water to be recharged. The ratio between average artificial recharge and average abstraction is 1.7 for the Δh-criterion and 1.5 for the %s-criterion. Both criteria were tested online. The methodologies were extended to a real-time control method using the Ensemble Kalman Filter for assimilating 87 online available groundwater head measurements to update the model in real time. The results of the operational implementation are also satisfactory with regard to the reduced risk of well contamination. Copyright © 2012 Elsevier Ltd. All rights reserved.
Failure prediction of thin beryllium sheets used in spacecraft structures
NASA Technical Reports Server (NTRS)
Roschke, Paul N.; Mascorro, Edward; Papados, Photios; Serna, Oscar R.
1991-01-01
The primary objective of this study is to develop a method for prediction of failure of thin beryllium sheets that undergo complex states of stress. Major components of the research include experimental evaluation of strength parameters for cross-rolled beryllium sheet, application of the Tsai-Wu failure criterion to plate bending problems, development of a high-order failure criterion, application of the new criterion to a variety of structures, and incorporation of both failure criteria into a finite element code. A Tsai-Wu failure model for SR-200 sheet material is developed from available tensile data, experiments carried out by NASA on two circular plates, and compression and off-axis experiments performed in this study. The failure surface obtained from the resulting criterion forms an ellipsoid. By supplementing the experimental data used in the two-dimensional criterion and modifying previously suggested failure criteria, a multi-dimensional failure surface is proposed for thin beryllium structures. The new criterion for orthotropic material is represented by a failure surface in six-dimensional stress space. In order to determine the coefficients of the governing equation, a number of uniaxial, biaxial, and triaxial experiments are required. These experiments and a complementary ultrasonic investigation are described in detail. Finally, the validity of the criterion and the newly determined mechanical properties is established through experiments on structures composed of SR-200 sheet material. These experiments include a plate-plug arrangement under a complex state of stress and a series of plates with an out-of-plane central point load. Both criteria have been incorporated into a general purpose finite element analysis code. Numerical simulation incrementally applies loads to a structural component that is being designed and checks each nodal point in the model for exceedance of a failure criterion. If stresses at all locations do not exceed the failure criterion, the load is increased and the process is repeated. Failure results for the plate-plug and clamped plate tests are accurate to within 2 percent.
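For readers unfamiliar with the criterion named above, the following sketch evaluates the plane-stress Tsai-Wu failure index with the commonly used default interaction term; the strength values are hypothetical placeholders, not the SR-200 parameters determined in the study.

```python
import numpy as np

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; values >= 1 indicate predicted failure.
    Xc and Yc are the magnitudes of the compressive strengths."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * np.sqrt(F11 * F22)   # common default choice for the interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

# Hypothetical in-plane stresses and strengths (MPa), for illustration only.
print(tsai_wu_index(s1=200.0, s2=50.0, t12=30.0,
                    Xt=350.0, Xc=300.0, Yt=320.0, Yc=280.0, S=180.0))
```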
Development of dialog system powered by textual educational content
NASA Astrophysics Data System (ADS)
Bisikalo, Oleg V.; Dovgalets, Sergei M.; Pijarski, Paweł; Lisovenko, Anna I.
2016-09-01
Advances in computer technology require interconnection between humans and computers and, more specifically, between complex information systems. The paper is therefore dedicated to the creation of dialog systems able to respond to users based on the implemented textual educational content. To support the dialog, a knowledge base model built on the unit and a fuzzy sense relation has been suggested. Lexical meanings are extracted from the text by processing the syntactic links between the allologs of all the sentences, and the answer is generated as the composition of fuzzy relations according to the formal criterion. The information support technology has been put to an evaluation sample test, which demonstrates the combination of information from several sentences in the final response.
Splett, Joni W; Smith-Millman, Marissa; Raborn, Anthony; Brann, Kristy L; Flaspohler, Paul D; Maras, Melissa A
2018-03-08
The current study examined between-teacher variance in teacher ratings of student behavioral and emotional risk to identify student, teacher and classroom characteristics that predict such differences and can be considered in future research and practice. Data were taken from seven elementary schools in one school district implementing universal screening, including 1,241 students rated by 68 teachers. Students were mostly African American (68.5%) with equal gender (female 50.1%) and grade-level distributions. Teachers, mostly White (76.5%) and female (89.7%), completed both a background survey regarding their professional experiences and demographic characteristics and the Behavior Assessment System for Children (Second Edition) Behavioral and Emotional Screening System-Teacher Form for all students in their class, rating an average of 17.69 students each. Extant student data were provided by the district. Analyses followed multilevel linear model stepwise model-building procedures. We detected a significant amount of variance in teachers' ratings of students' behavioral and emotional risk at both student and teacher/classroom levels, with student predictors explaining about 39% of student-level variance and teacher/classroom predictors explaining about 20% of between-teacher differences. The final model fit the data (Akaike information criterion = 8,687.709; pseudo-R2 = 0.544) significantly better than the null model (Akaike information criterion = 9,457.160). Significant predictors included student gender, race/ethnicity, academic performance and disciplinary incidents, teacher gender, student-teacher gender interaction, teacher professional development in behavior screening, and classroom academic performance. Future research and practice should interpret teacher-rated universal screening of students' behavioral and emotional risk with consideration of the detected between-teacher variance that is unrelated to student behavior. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Dolejsi, Erich; Bodenstorfer, Bernhard; Frommlet, Florian
2014-01-01
The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that in the case of complex traits such single marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we will discuss an alternative model selection approach which is based on a modification of the Bayesian Information Criterion (mBIC2), which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model which minimizes mBIC2, and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case-control GWAS is compared with the two algorithms HLASSO and d-GWASelect, as well as with single marker tests, where we performed a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO, where specifically for more complex models MOSGWA is more powerful with only a slight increase in type I error. On the other hand, according to our simulations, d-GWASelect does not control the type I error at all when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium and compare the findings of the different procedures, where MOSGWA detects for complex diseases a number of interesting SNPs which are not found by other methods. PMID:25061809
A complete graphical criterion for the adjustment formula in mediation analysis.
Shpitser, Ilya; VanderWeele, Tyler J
2011-03-04
Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
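To make the model class discussed above concrete, here is a minimal sketch of the Poisson binomial N-mixture log-likelihood, with the latent abundance summed out up to a truncation bound; the counts, abundance rate and detection probability are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson, binom

def nmix_loglik(counts, lam, p, K=200):
    """Log-likelihood of a Poisson binomial N-mixture model.
    counts: sites x visits array of repeated counts,
    lam: expected abundance per site, p: detection probability,
    K: truncation bound for the sum over latent abundance N."""
    ll = 0.0
    for y in counts:
        N = np.arange(y.max(), K + 1)
        site_lik = np.sum(poisson.pmf(N, lam) *
                          np.prod(binom.pmf(y[:, None], N[None, :], p), axis=0))
        ll += np.log(site_lik)
    return ll

# Hypothetical counts from 3 sites visited 3 times each.
counts = np.array([[2, 1, 3], [0, 1, 0], [4, 2, 3]])
print(nmix_loglik(counts, lam=3.0, p=0.5))
```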
Han, Sanghoon; Dobbins, Ian G.
2009-01-01
Recognition models often assume that subjects use specific evidence values (decision criteria) to adaptively parse continuous memory evidence into response categories (e.g., “old” or “new”). Although explicit pre-test instructions influence criterion placement, these criteria appear extremely resistant to change once testing begins. We tested criterion sensitivity to local feedback using a novel, biased feedback technique designed to tacitly encourage certain errors by indicating they were correct choices. Experiment 1 demonstrated that fully correct feedback had little effect on criterion placement, whereas biased feedback during Experiments 2 and 3 yielded prominent, durable, and adaptive criterion shifts, with observers reporting they were unaware of the manipulation in Experiment 3. These data suggest recognition criteria can be easily modified during testing through a form of feedback learning that operates independent of stimulus characteristics and observer awareness of the nature of the manipulation. This mechanism may be fundamentally different than criterion shifts following explicit instructions and warnings, or shifts linked to manipulations of stimulus characteristics combined with feedback highlighting those manipulations. PMID:18604954
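Criterion placement in such recognition studies is usually quantified with the signal-detection statistic c alongside sensitivity d'. The sketch below computes both from hit and false-alarm rates; the pre- and post-feedback rates are hypothetical and simply illustrate the liberal shift that biased feedback is reported to induce.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion placement (c).
    Positive c indicates a conservative bias toward responding "new"."""
    zH, zF = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return zH - zF, -0.5 * (zH + zF)

# Hypothetical rates before and after feedback that tacitly rewards "old" errors.
print(sdt_measures(0.75, 0.20))   # before feedback
print(sdt_measures(0.85, 0.35))   # after a liberal criterion shift
```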
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
Assessment of corneal properties based on statistical modeling of OCT speckle.
Jesus, Danilo A; Iskander, D Robert
2017-01-01
A new approach to assessing the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from the raw OCT image. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The Generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting the OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the Generalized Gamma distribution can be utilized to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of the essence.
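A rough sketch of the distribution-comparison step described above: fit several candidate speckle models to amplitude data and rank them by AIC. The data here are synthetic draws standing in for OCT speckle, and the candidate set and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for OCT speckle amplitudes (real data would come from the raw scan).
speckle = stats.gengamma.rvs(a=2.0, c=1.5, scale=0.8, size=2000, random_state=rng)

def aic(dist, data):
    """AIC = 2k - 2 log L, with all of the distribution's parameters fitted freely."""
    params = dist.fit(data)
    llf = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * llf

candidates = {"gengamma": stats.gengamma, "gamma": stats.gamma,
              "rayleigh": stats.rayleigh, "lognorm": stats.lognorm}
print({name: round(aic(d, speckle), 1) for name, d in candidates.items()})  # lower is better
```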
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This document furnishes a complete copy of the Test Subject's Instructions and the Test Administrator's Handbook for a battery of criterion referenced Job Task Performance Tests (JTPT) for electronic maintenance. General information is provided on soldering, Radar Set AN/APN-147(v), Radar Set Special Equipment, Radar Set Bench Test Set-Up, and…
Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
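A minimal numerical sketch of the Bayes' Theorem classification the digest describes: the posterior probability that an examinee is a master given an observed score. The prior and the two conditional probabilities are hypothetical values of the kind a criterion-referenced testing application would supply.

```python
def posterior_master(prior_master, p_obs_given_master, p_obs_given_nonmaster):
    """Bayes' theorem for a two-state (master / nonmaster) classification."""
    num = p_obs_given_master * prior_master
    den = num + p_obs_given_nonmaster * (1.0 - prior_master)
    return num / den

# Hypothetical values: prior mastery rate 0.70, and the probability of the
# observed test score under each state taken from a simple item model.
print(posterior_master(0.70, p_obs_given_master=0.60, p_obs_given_nonmaster=0.15))
```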
Vehicle lift-off modelling and a new rollover detection criterion
NASA Astrophysics Data System (ADS)
Mashadi, Behrooz; Mostaghimi, Hamid
2017-05-01
The modelling and development of a general criterion for the prediction of the rollover threshold is the main purpose of this work. Vehicle dynamics models for the phase after wheel lift-off, when the vehicle moves on two wheels, are derived, and the governing equations are used to develop the rollover threshold. These models include the properties of the suspension and steering systems. In order to study the stability of motion, the steady-state solutions of the equations of motion are obtained. Based on the stability analyses, a new relation is obtained for the rollover threshold in terms of measurable response parameters. The presented criterion predicts the best time for preventing vehicle rollover by applying a correcting moment. It is shown that the introduced rollover threshold is a proper state of vehicle motion that is best for stabilising the vehicle with a low energy requirement.
Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters
NASA Astrophysics Data System (ADS)
Selyutina, N. S.; Petrov, Yu. V.
2018-02-01
The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion; satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and provide an effective and convenient equation for determining the yield strength over a wider range of strain rates.
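For reference, the two empirical rate-dependence models named above have simple closed forms; the sketch below evaluates both, with parameter values that are generic textbook-style numbers for a mild steel rather than the constants fitted in the paper.

```python
import numpy as np

def cowper_symonds(sigma0, strain_rate, D, q):
    """Dynamic yield stress: sigma0 scaled by the Cowper-Symonds rate factor."""
    return sigma0 * (1.0 + (strain_rate / D) ** (1.0 / q))

def johnson_cook_yield(A, B, C, n, strain, strain_rate, ref_rate=1.0):
    """Johnson-Cook flow stress at room temperature (thermal softening term omitted)."""
    return (A + B * strain**n) * (1.0 + C * np.log(strain_rate / ref_rate))

# Hypothetical mild-steel-like parameters, for illustration only.
rates = np.array([1e-3, 1.0, 1e2, 1e4])
print(cowper_symonds(250.0, rates, D=40.4, q=5.0))
print(johnson_cook_yield(A=250.0, B=400.0, C=0.02, n=0.3,
                         strain=0.002, strain_rate=rates))
```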
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Thoughts on Information Literacy and the 21st Century Workplace.
ERIC Educational Resources Information Center
Beam, Walter R.
2001-01-01
Discusses changes in society that have led to literacy skills being a criterion for employment. Topics include reading; communication skills; writing; cognitive processes; math; computers, the Internet, and the information revolution; information needs and access; information cross-linking; information literacy; and hardware and software use. (LRW)
Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini
2016-01-01
The aim of this study was to compare two random regression models (RRM) fitted by fourth (RRM4) and fifth-order Legendre polynomials (RRM5) with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared for this study. To apply test-day RRM and LMs, 262,426 test day records and 30,228 lactation records covering 305 days were prepared, respectively. The lowest values of Akaike’s information criterion, Bayesian information criterion, and estimates of the maximum of the likelihood function (−2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic and permanent environmental variances of test days on days in milk was from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than that in LM. Rank correlations between RRM EBVs and LM EBVs were between 0.86 to 0.96 for bulls and 0.80 to 0.87 for cows. Average percentage of gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when reliability of EBVs from RRM models was compared to those from LM model. Random regression model fitted by fourth order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values. PMID:26954176
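As a small illustration of the test-day machinery behind such random regression models, the sketch below builds normalized Legendre covariables for days in milk; the standardization range of 5-305 days and the convention that "fourth order" means four coefficients are assumptions made for the example.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, order, dim_min=5, dim_max=305):
    """Normalized Legendre covariables for days in milk (DIM), as commonly used
    in test-day random regression models.  'order' is taken here as the number
    of coefficients (degrees 0 .. order-1)."""
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    basis = np.column_stack([legendre.legval(x, np.eye(order)[j]) for j in range(order)])
    norm = np.sqrt((2 * np.arange(order) + 1) / 2.0)   # normalization constants
    return basis * norm

print(legendre_basis([5, 60, 155, 250, 305], order=4).round(3))
```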
A comparison of the Injury Severity Score and the Trauma Mortality Prediction Model.
Cook, Alan; Weddle, Jo; Baker, Susan; Hosmer, David; Glance, Laurent; Friedman, Lee; Osler, Turner
2014-01-01
Performance benchmarking requires accurate measurement of injury severity. Despite its shortcomings, the Injury Severity Score (ISS) remains the industry standard 40 years after its creation. A new severity measure, the Trauma Mortality Prediction Model (TMPM), uses either the Abbreviated Injury Scale (AIS) or DRG International Classification of Diseases-9th Rev. (ICD-9) lexicons and may better quantify injury severity compared with ISS. We compared the performance of TMPM with ISS and other measures of injury severity in a single cohort of patients. We included 337,359 patient records with injuries reliably described in both the AIS and the ICD-9 lexicons from the National Trauma Data Bank. Five injury severity measures (ISS, maximum AIS score, New Injury Severity Score [NISS], ICD-9-Based Injury Severity Score [ICISS], TMPM) were computed using either the AIS or ICD-9 codes. These measures were compared for discrimination (area under the receiver operating characteristic curve), an estimate of proximity to a model that perfectly predicts the outcome (Akaike information criterion), and model calibration curves. TMPM demonstrated superior receiver operating characteristic curve, Akaike information criterion, and calibration using either the AIS or ICD-9 lexicons. Calibration plots demonstrate the monotonic characteristics of the TMPM models contrasted by the nonmonotonic features of the other prediction models. Severity measures were more accurate with the AIS lexicon rather than ICD-9. NISS proved superior to ISS in either lexicon. Since NISS is simpler to compute, it should replace ISS when a quick estimate of injury severity is required for AIS-coded injuries. Calibration curves suggest that the nonmonotonic nature of ISS may undermine its performance. TMPM demonstrated superior overall mortality prediction compared with all other models including ISS whether the AIS or ICD-9 lexicons were used. Because TMPM provides an absolute probability of death, it may allow clinicians to communicate more precisely with one another and with patients and families. Diagnostic study, level I; prognostic study, level II.
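The two headline comparisons in such studies — discrimination (area under the ROC curve) and the Akaike information criterion of a mortality model built on each severity score — can be reproduced on any registry extract. The sketch below does this for two simulated severity scores standing in for ISS/TMPM-style measures; the data are entirely synthetic.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 5000
# Hypothetical severity scores and a simulated mortality outcome.
severity = {"score_a": rng.normal(size=n), "score_b": rng.normal(size=n)}
logit = 0.9 * severity["score_a"] + 0.3 * severity["score_b"] - 3.0
death = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a logistic mortality model per score and report AUC and AIC.
for name, score in severity.items():
    fit = sm.Logit(death, sm.add_constant(score)).fit(disp=0)
    auc = roc_auc_score(death, fit.predict(sm.add_constant(score)))
    print(name, round(auc, 3), round(fit.aic, 1))
```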
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow, which defined the symbol alphabet. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
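A minimal sketch of the symbolization step and one of the metrics mentioned above (mean information gain, taken here as the block-entropy difference for blocks of length one, i.e. the conditional entropy of the next symbol given the current one); the streamflow series is a smoothed-noise surrogate, and the alphabet size of four is an assumption.

```python
import numpy as np

def symbolize(series, n_symbols=4):
    """Map a time series to symbols defined by quantiles of its distribution."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, edges)

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mean_information_gain(symbols):
    """Conditional entropy H(next | current): block-entropy difference for L=1."""
    pairs = symbols[:-1] * 10 + symbols[1:]   # unique encoding of consecutive pairs
    return entropy(pairs) - entropy(symbols[:-1])

# Hypothetical daily series: an autocorrelated (filtered) signal should show a
# smaller information gain (i.e., be more predictable) than pure noise.
rng = np.random.default_rng(3)
noise = rng.normal(size=3650)
flow = np.convolve(noise, np.ones(10) / 10, mode="valid")
print(mean_information_gain(symbolize(noise)), mean_information_gain(symbolize(flow)))
```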
Remontet, L; Bossard, N; Belot, A; Estève, J
2007-05-10
Relative survival provides a measure of the proportion of patients dying from the disease under study without requiring the knowledge of the cause of death. We propose an overall strategy based on regression models to estimate the relative survival and model the effects of potential prognostic factors. The baseline hazard was modelled until 10 years follow-up using parametric continuous functions. Six models including cubic regression splines were considered and the Akaike Information Criterion was used to select the final model. This approach yielded smooth and reliable estimates of mortality hazard and allowed us to deal with sparse data taking into account all the available information. Splines were also used to model simultaneously non-linear effects of continuous covariates and time-dependent hazard ratios. This led to a graphical representation of the hazard ratio that can be useful for clinical interpretation. Estimates of these models were obtained by likelihood maximization. We showed that these estimates could be also obtained using standard algorithms for Poisson regression. Copyright 2006 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bakhshandeh, Mohsen; Hashemi, Bijan, E-mail: bhashemi@modares.ac.ir; Mahdavi, Seied Rabi Mehdi
Purpose: To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Methods and Materials: Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Results: Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D50 estimated from the models was approximately 44 Gy. Conclusions: The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication.
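The LEUD model named above is commonly written as a probit function of the generalized equivalent uniform dose. The sketch below is a minimal, hedged illustration of that form and of ranking fitted models by Akaike's information criterion; the DVH arrays, parameter values, and single-patient likelihood are placeholders, not the study's fitted results.

```python
import numpy as np
from scipy.stats import norm

def geud(doses_gy, rel_volumes, n):
    """Generalized EUD from a differential DVH (dose bins in Gy, fractional volumes)."""
    v = np.asarray(rel_volumes) / np.sum(rel_volumes)
    return float(np.sum(v * np.asarray(doses_gy) ** (1.0 / n)) ** n)

def lkb_ntcp(doses_gy, rel_volumes, n, m, td50):
    """Lyman model with the DVH reduced to EUD: NTCP = Phi((EUD - TD50) / (m * TD50))."""
    t = (geud(doses_gy, rel_volumes, n) - td50) / (m * td50)
    return float(norm.cdf(t))

def aic(log_likelihood, n_params):
    """Akaike's information criterion; lower values rank a model closer to the data."""
    return 2.0 * n_params - 2.0 * log_likelihood

# hypothetical patient: thyroid DVH bins and an observed hypothyroidism outcome (1 = complication)
doses = np.array([10.0, 25.0, 40.0, 55.0])
vols = np.array([0.1, 0.3, 0.4, 0.2])
p = lkb_ntcp(doses, vols, n=1.0, m=0.3, td50=44.0)   # n = 1 mimics a mean-dose-like model
y = 1
loglik = y * np.log(p) + (1 - y) * np.log(1 - p)     # summed over all patients in a real fit
print(p, aic(loglik, n_params=3))
```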
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than by AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large scale multi-gene analyses.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
[Precarious employment in undocumented immigrants in Spain and its relationship with health].
Porthé, Victoria; Benavides, Fernando G; Vázquez, M Luisa; Ruiz-Frutos, Carlos; García, Ana M; Ahonen, Emily; Agudelo-Suárez, Andrés A; Benach, Joan
2009-12-01
To describe the characteristics of precarious employment in undocumented immigrants in Spain and its relationship with health. A qualitative study was conducted using analytic induction. Criterion sampling, based on the Immigration, Work and Health project (Inmigración, Trabajo y Salud [ITSAL]) criterion (current definitions of 'legal immigrant' in Spain and in the literature) was used to recruit 44 undocumented immigrant workers from four different countries, living in four Spanish cities. The characteristics of precariousness perceived by undocumented immigrants included high job instability; disempowerment due to lack of legal protection; high vulnerability exacerbated by their legal and immigrant status; perceived insufficient wages and lower wages than coworkers; limited social benefits and difficulty in exercising their rights; and finally, long hours and fast-paced work. Our informants reported they had no serious health problems but did describe physical and mental problems associated with their employment conditions and legal situation. Our results suggest that undocumented immigrants' situation may not fit the model of precarious employment exactly. However, the model's dimensions can be expanded to better represent undocumented immigrants' situation, thus strengthening the general model. Precarious employment in this group can be defined as
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd.
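A minimal sketch of the smoothing idea (not the authors' implementation): a least-squares B-spline fit with a Tikhonov penalty on second differences of the coefficients. The knot spacing, penalty weight, and test signal are placeholders, and the whitening versus generalized cross-validation choice of the regularization parameter is not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_basis(x, knots, degree=3):
    """Design matrix whose columns are B-spline basis functions evaluated at x."""
    n_coef = len(knots) - degree - 1
    return np.column_stack(
        [BSpline(knots, np.eye(n_coef)[j], degree)(x) for j in range(n_coef)]
    )

def tikhonov_spline_fit(x, y, n_knots=20, lam=1.0, degree=3):
    """Solve min ||y - B c||^2 + lam * ||D2 c||^2 with a second-difference penalty D2."""
    t = np.linspace(x.min(), x.max(), n_knots)
    knots = np.concatenate([[t[0]] * degree, t, [t[-1]] * degree])   # clamped knot vector
    B = bspline_basis(x, knots, degree)
    D2 = np.diff(np.eye(B.shape[1]), n=2, axis=0)
    coef = np.linalg.solve(B.T @ B + lam * D2.T @ D2, B.T @ y)
    return B @ coef

# toy fMRI-like block response with noise
x = np.linspace(0, 60, 300)
y = np.sin(2 * np.pi * x / 30) + 0.3 * np.random.default_rng(1).standard_normal(300)
smooth = tikhonov_spline_fit(x, y, lam=5.0)
```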
ERIC Educational Resources Information Center
Phemister, Art W.
2010-01-01
The purpose of this study was to evaluate the effectiveness of the Georgia's Choice reading curriculum on third grade science scores on the Georgia Criterion Referenced Competency Test from 2002 to 2008. In assessing the effectiveness of the Georgia's Choice curriculum model this causal comparative study examined the 105 elementary schools that…
Bivariate copula in fitting rainfall data
NASA Astrophysics Data System (ADS)
Yee, Kong Ching; Suhaila, Jamaludin; Yusof, Fadhilah; Mean, Foo Hui
2014-07-01
The use of copulas to determine the joint distribution between two variables is widespread in various areas. The joint distribution of rainfall characteristics obtained using a copula model is more suitable than standard bivariate modelling, since copulas are believed to overcome some of its limitations. Six copula models are applied to obtain the most suitable bivariate distribution between two rain gauge stations. The copula models are Ali-Mikhail-Haq (AMH), Clayton, Frank, Galambos, Gumbel-Hougaard (GH) and Plackett. The rainfall data used in the study are selected from rain gauge stations located in the southern part of Peninsular Malaysia, during the period from 1980 to 2011. The selection of the best-fitting copula in this study is based on the Akaike information criterion (AIC).
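As a sketch of how two of the six candidate families could be compared, the code below fits the Clayton and Frank copulas by maximum likelihood on rank-based pseudo-observations and ranks them by AIC; the synthetic data, parameter bounds, and the restriction to two families are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

def clayton_logpdf(u, v, theta):
    return (np.log(1 + theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u ** -theta + v ** -theta - 1))

def frank_logpdf(u, v, theta):
    num = theta * (1 - np.exp(-theta)) * np.exp(-theta * (u + v))
    den = ((1 - np.exp(-theta))
           - (1 - np.exp(-theta * u)) * (1 - np.exp(-theta * v))) ** 2
    return np.log(num / den)

def fit_by_aic(u, v, logpdf, bounds):
    res = minimize_scalar(lambda th: -np.sum(logpdf(u, v, th)), bounds=bounds, method="bounded")
    return 2 * 1 + 2 * res.fun, res.x          # AIC = 2k - 2*loglik with k = 1 parameter

# pseudo-observations from two hypothetical rain gauge records x and y
rng = np.random.default_rng(2)
x = rng.gamma(2, 10, 500); y = 0.6 * x + rng.gamma(2, 10, 500)
u = rankdata(x) / (len(x) + 1); v = rankdata(y) / (len(y) + 1)

for name, logpdf, bounds in [("Clayton", clayton_logpdf, (0.01, 20)),
                             ("Frank", frank_logpdf, (0.01, 30))]:
    aic, theta = fit_by_aic(u, v, logpdf, bounds)
    print(name, round(theta, 2), round(aic, 1))     # smaller AIC indicates the better fit
```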
González-Moreno, A; Bordera, S; Leirana-Alcocer, J; Delfín-González, H
2012-06-01
The biology and behavior of insects are strongly influenced by environmental conditions such as temperature and precipitation. Because some of these factors vary within the day, they may cause variations in insect diurnal flight activity, but scant information exists on the issue. The aim of this work was to describe the patterns of diurnal variation in the abundance of Ichneumonoidea and their relation with relative humidity, temperature, light intensity, and wind speed. The study site was a tropical dry forest at Ría Lagartos Biosphere Reserve, Mexico, where correlations between environmental factors (relative humidity, temperature, light, and wind speed) and the abundance of Ichneumonidae and Braconidae (Hymenoptera: Ichneumonoidea) were estimated. The best regression model for explaining abundance variation was selected using the second order Akaike Information Criterion. The optimum values of temperature, humidity, and light for flight activity of both families were also estimated. Ichneumonid and braconid abundances were significantly correlated with relative humidity, temperature, and light intensity; ichneumonids also showed significant correlations with wind speed. The second order Akaike Information Criterion suggests that in tropical dry conditions, relative humidity is more important than temperature for Ichneumonoidea diurnal activity. Ichneumonid wasps showed selection toward intermediate values of relative humidity and temperature and toward the lowest wind speeds, while braconids selected for low values of relative humidity. For light intensity, braconids showed a positive selection for moderately high values.
Modeling of cw OIL energy performance based on similarity criteria
NASA Astrophysics Data System (ADS)
Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.
2012-01-01
A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with a stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of the active medium in the resonator to the O2(1Δ) reduction time at infinitely large intraresonator intensity. Criterion Π is the small-signal gain to threshold ratio. Criterion Λ is the relaxation to excitation rate ratio for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria lie in the intervals τd = 5-8, Π = 3-8 and Λ ≤ 0.01.
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes. PMID:29872389
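The submodularity argument above is what justifies the lazy greedy strategy. Below is a generic, hedged sketch of lazy greedy selection of a fixed-size action set under a user-supplied monotone submodular gain function, with the priority-queue bookkeeping that makes it "lazy"; the toy coverage-based gain is a placeholder, not the MHDP Monte Carlo IG approximation.

```python
import heapq

def lazy_greedy(actions, gain, budget):
    """Select up to `budget` actions for a monotone submodular gain function.

    `gain(selected, a)` returns the marginal gain of adding action `a` to the
    already selected set (for the MHDP this would be the Monte Carlo IG estimate).
    """
    selected = []
    heap = [(-gain(selected, a), a) for a in actions]   # possibly stale upper bounds
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        _, a = heapq.heappop(heap)
        current = gain(selected, a)          # re-evaluate only the current best candidate
        if not heap or current >= -heap[0][0]:
            selected.append(a)               # bound still dominates: safe by submodularity
        else:
            heapq.heappush(heap, (-current, a))
    return selected

# toy usage: actions "cover" observation channels; coverage gain is monotone submodular
coverage = {"look": {1, 2, 3}, "grasp": {3, 4}, "shake": {4, 5, 6}, "knock": {2, 3}}

def marginal_gain(selected, a):
    covered = set().union(*(coverage[s] for s in selected)) if selected else set()
    return len(coverage[a] - covered)

print(lazy_greedy(list(coverage), marginal_gain, budget=2))   # e.g. ['look', 'shake']
```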
de Geus, Eveline; Aalfs, Cora M; Menko, Fred H; Sijmons, Rolf H; Verdam, Mathilde G E; de Haes, Hanneke C J M; Smets, Ellen M A
2015-08-01
Despite the use of genetic services, counselees do not always share hereditary cancer information with at-risk relatives. Reasons for not informing relatives may be categorized as a lack of: knowledge, motivation, and/or self-efficacy. This study aims to develop and test the psychometric properties of the Informing Relatives Inventory, a battery of instruments that intend to measure counselees' knowledge, motivation, and self-efficacy regarding the disclosure of hereditary cancer risk information to at-risk relatives. Guided by the proposed conceptual framework, existing instruments were selected and new instruments were developed. We tested the instruments' acceptability, dimensionality, reliability, and criterion-related validity in consecutive index patients visiting the Clinical Genetics department with questions regarding hereditary breast and/or ovarian cancer or colon cancer. Data of 211 index patients were included (response rate = 62%). The Informing Relatives Inventory (IRI) assesses three barriers in disclosure representing seven domains. Instruments assessing index patients' (positive) motivation and self-efficacy were acceptable and reliable and suggested good criterion-related validity. Psychometric properties of instruments assessing index patients' knowledge were disputable. These items were moderately accepted by index patients and their criterion-related validity was weaker. This study presents a first conceptual framework and associated inventory (IRI) that improves insight into index patients' barriers regarding the disclosure of genetic cancer information to at-risk relatives. Instruments assessing (positive) motivation and self-efficacy proved to be reliable measurements. Measuring index patients' knowledge appeared to be more challenging. Further research is necessary to ensure IRI's dimensionality and sensitivity to change.
An empirical model of human aspiration in low-velocity air using CFD investigations.
Anthony, T Renée; Anderson, Kimberly R
2015-01-01
Computational fluid dynamics (CFD) modeling was performed to investigate the aspiration efficiency of the human head in low velocities to examine whether the current inhaled particulate mass (IPM) sampling criterion matches the aspiration efficiency of an inhaling human in airflows common to worker exposures. Data from both mouth and nose inhalation, averaged to assess omnidirectional aspiration efficiencies, were compiled and used to generate a unifying model to relate particle size to aspiration efficiency of the human head. Multiple linear regression was used to generate an empirical model to estimate human aspiration efficiency and included particle size as well as breathing and freestream velocities as dependent variables. A new set of simulated mouth and nose breathing aspiration efficiencies was generated and used to test the fit of empirical models. Further, empirical relationships between test conditions and CFD estimates of aspiration were compared to experimental data from mannequin studies, including both calm-air and ultra-low velocity experiments. While a linear relationship between particle size and aspiration is reported in calm air studies, the CFD simulations identified a more reasonable fit using the square of particle aerodynamic diameter, which better addressed the shape of the efficiency curve's decline toward zero for large particles. The ultimate goal of this work was to develop an empirical model that incorporates real-world variations in critical factors associated with particle aspiration to inform low-velocity modifications to the inhalable particle sampling criterion.
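A hedged sketch of the kind of empirical regression described above, with aspiration efficiency regressed on the square of aerodynamic diameter plus freestream and breathing velocities by ordinary least squares; the variable ranges, coefficients, and data are placeholders, not the published model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
d = rng.uniform(5, 100, 200)            # aerodynamic diameter, micrometres (hypothetical range)
u_free = rng.uniform(0.1, 0.4, 200)     # freestream velocity, m/s (hypothetical range)
u_breath = rng.uniform(1.8, 4.3, 200)   # breathing velocity, m/s (hypothetical range)
eff = 1.0 - 6e-5 * d**2 + 0.1 * u_free - 0.01 * u_breath + rng.normal(0, 0.02, 200)

X = sm.add_constant(np.column_stack([d**2, u_free, u_breath]))
model = sm.OLS(eff, X).fit()
print(model.params)                     # intercept and coefficients on d^2 and the two velocities
```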
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling thanks to crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined to this aim and was evaluated on simulated and real data, with the example of wheat phenology. The MET defined with OptiMET allowed estimating the genetic parameters with lower error, leading to higher QTL detection power and higher prediction accuracies. The MET defined with OptiMET was on average more efficient than a random MET composed of twice as many environments, in terms of quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently developed.
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Tefferi, Ayalew; Gangat, Naseema; Mudireddy, Mythri; Lasho, Terra L; Finke, Christy; Begna, Kebede H; Elliott, Michelle A; Al-Kali, Aref; Litzow, Mark R; Hook, C Christopher; Wolanskyj, Alexandra P; Hogan, William J; Patnaik, Mrinal M; Pardanani, Animesh; Zblewski, Darci L; He, Rong; Viswanatha, David; Hanson, Curtis A; Ketterling, Rhett P; Tang, Jih-Luh; Chou, Wen-Chien; Lin, Chien-Chin; Tsai, Cheng-Hong; Tien, Hwei-Fang; Hou, Hsin-An
2018-06-01
To develop a new risk model for primary myelodysplastic syndromes (MDS) that integrates information on mutations, karyotype, and clinical variables. Patients with World Health Organization-defined primary MDS seen at Mayo Clinic (MC) from December 28, 1994, through December 19, 2017, constituted the core study group. The National Taiwan University Hospital (NTUH) provided the validation cohort. Model performance, compared with the revised International Prognostic Scoring System, was assessed by Akaike information criterion and area under the curve estimates. The study group consisted of 685 molecularly annotated patients from MC (357) and NTUH (328). Multivariate analysis of the MC cohort identified monosomal karyotype (hazard ratio [HR], 5.2; 95% CI, 3.1-8.6), "non-MK abnormalities other than single/double del(5q)" (HR, 1.8; 95% CI, 1.3-2.6), RUNX1 (HR, 2.0; 95% CI, 1.2-3.1) and ASXL1 (HR, 1.7; 95% CI, 1.2-2.3) mutations, absence of SF3B1 mutations (HR, 1.6; 95% CI, 1.1-2.4), age greater than 70 years (HR, 2.2; 95% CI, 1.6-3.1), hemoglobin level less than 8 g/dL in women or less than 9 g/dL in men (HR, 2.3; 95% CI, 1.7-3.1), platelet count less than 75 × 10⁹/L (HR, 1.5; 95% CI, 1.1-2.1), and 10% or more bone marrow blasts (HR, 1.7; 95% CI, 1.1-2.8) as predictors of inferior overall survival. Based on HR-weighted risk scores, a 4-tiered Mayo alliance prognostic model for MDS was devised: low (89 patients), intermediate-1 (104), intermediate-2 (95), and high (69); respective median survivals (5-year overall survival rates) were 85 (73%), 42 (34%), 22 (7%), and 9 months (0%). The Mayo alliance model was subsequently validated by using the external NTUH cohort and, compared with the revised International Prognostic Scoring System, displayed favorable Akaike information criterion (1865 vs 1943) and area under the curve (0.87 vs 0.76) values. We propose a simple and contemporary risk model for MDS that is based on a limited set of genetic and clinical variables. Copyright © 2018. Published by Elsevier Inc.
Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C
2017-01-01
Background Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods Urinary concentrations of 16 types of metals were examined and ‘acceleration capacity’ (AC) and ‘deceleration capacity’ (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with ‘selected’ metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, –0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
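The mixed-effects LASSO used in the study is not reproduced here; as a simplified, fixed-effects sketch of information-criterion-guided LASSO selection, scikit-learn's LassoLarsIC with the BIC can pick the metal components whose coefficients survive shrinkage. The metal names, sample size, and data below are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.default_rng(4)
metals = ["mercury", "chromium", "manganese", "lead", "cadmium", "nickel"]
X = rng.lognormal(size=(54, len(metals)))                        # urinary concentrations (toy)
dc = -0.6 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.5, 54)     # deceleration capacity, ms (toy)

lasso = LassoLarsIC(criterion="bic").fit(X, dc)
selected = [m for m, c in zip(metals, lasso.coef_) if abs(c) > 0]
print(selected)      # components retained by the BIC-tuned LASSO
```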
Nowakowska, Marzena
2017-04-01
The development of a Bayesian logistic regression model classifying road accident severity is discussed. The previously exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated for the case in which no expert opinion is available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained Bayesian logistic models are assessed on the basis of the deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of model accuracy is based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters, since different priors can lead to different models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Modeling of weak blast wave propagation in the lung.
D'yachenko, A I; Manyuhina, O V
2006-01-01
Blast injuries of the lung are the most life-threatening after an explosion. The choice of physical parameters responsible for trauma is important to understand its mechanism. We developed a one-dimensional linear model of an elastic wave propagation in foam-like pulmonary parenchyma to identify the possible cause of edema due to the impact load. The model demonstrates different injury localizations for free and rigid boundary conditions. The following parameters were considered: strain, velocity, pressure in the medium and stresses in structural elements, energy dissipation, parameter of viscous criterion. Maximum underpressure is the most suitable wave parameter to be the criterion for edema formation in a rabbit lung. We supposed that observed scattering of experimental data on edema severity is induced by the physiological variety of rabbit lungs. The criterion and the model explain this scattering. The model outlines the demands for experimental data to make an unambiguous choice of physical parameters responsible for lung trauma due to impact load.
NASA Astrophysics Data System (ADS)
Tariq, Imran; Humbert-Vidan, Laia; Chen, Tao; South, Christopher P.; Ezhil, Veni; Kirkby, Norman F.; Jena, Rajesh; Nisbet, Andrew
2015-05-01
This paper reports a modelling study of tumour volume dynamics in response to stereotactic ablative radiotherapy (SABR). The main objective was to develop a model that is adequate to describe the tumour volume change measured during SABR, yet not so complex that it lacks support from the available clinical data. To this end, various modelling options were explored, and a rigorous statistical method, the Akaike information criterion, was used to help determine a trade-off between model accuracy and complexity. The models were calibrated to the data from 11 non-small cell lung cancer patients treated with SABR. The results showed that it is feasible to model the tumour volume dynamics during SABR, opening up the potential for using such models in a clinical environment in the future.
Observational constraints on cosmological models with Chaplygin gas and quadratic equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharov, G.S., E-mail: german.sharov@mail.ru
Observational manifestations of accelerated expansion of the universe, in particular, recent data for Type Ia supernovae, baryon acoustic oscillations, for the Hubble parameter H(z) and cosmic microwave background constraints are described with different cosmological models. We compare the ΛCDM, the models with generalized and modified Chaplygin gas and the model with quadratic equation of state. For these models we estimate optimal model parameters and their permissible errors with different approaches to calculation of the sound horizon scale r_s(z_d). Among the considered models the best value of χ² is achieved for the model with quadratic equation of state, but it has 2 additional parameters in comparison with the ΛCDM and therefore is not favored by the Akaike information criterion.
A multiloop generalization of the circle criterion for stability margin analysis
NASA Technical Reports Server (NTRS)
Safonov, M. G.; Athans, M.
1979-01-01
In order to provide a theoretical tool suited for characterizing the stability margins of multiloop feedback systems, multiloop input-output stability results generalizing the circle stability criterion are considered. Generalized conic sectors with 'centers' and 'radii' determined by linear dynamical operators are employed to specify the stability margins as a frequency dependent convex set of modeling errors (including nonlinearities, gain variations and phase variations) which the system must be able to tolerate in each feedback loop without instability. The resulting stability criterion gives sufficient conditions for closed loop stability in the presence of frequency dependent modeling errors, even when the modeling errors occur simultaneously in all loops. The stability conditions yield an easily interpreted scalar measure of the amount by which a multiloop system exceeds, or falls short of, its stability margin specifications.
Lee, Donghyun; Lee, Hojun; Choi, Munkee
2016-02-11
Internet search query data reflect the attitudes of the users, using which we can measure the past orientation to commit suicide. Examinations of past orientation often highlight certain predispositions of attitude, many of which can be suicide risk factors. To investigate the relationship between past orientation and suicide rate by examining Google search queries. We measured the past orientation using Google search query data by comparing the search volumes of the past year and those of the future year, across the 50 US states and the District of Columbia during the period from 2004 to 2012. We constructed a panel dataset with independent variables as control variables; we then undertook an analysis using multiple ordinary least squares regression and methods that leverage the Akaike information criterion and the Bayesian information criterion. It was found that past orientation had a positive relationship with the suicide rate (P ≤ .001) and that it improves the goodness-of-fit of the model regarding the suicide rate. Unemployment rate (P ≤ .001 in Models 3 and 4), Gini coefficient (P ≤ .001), and population growth rate (P ≤ .001) had a positive relationship with the suicide rate, whereas the gross state product (P ≤ .001) showed a negative relationship with the suicide rate. We empirically identified the positive relationship between the suicide rate and past orientation, which was measured by big data-driven Google search query.
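As a sketch of comparing nested ordinary least squares specifications by AIC and BIC (the study's panel structure and actual covariates are not reproduced; the names and synthetic data below are placeholders):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 51 * 9                                   # states x years, mimicking the panel layout
unemployment = rng.normal(6, 2, n)
gini = rng.normal(0.45, 0.03, n)
past_orientation = rng.normal(0, 1, n)
suicide_rate = 10 + 0.4 * unemployment + 8 * gini + 0.9 * past_orientation + rng.normal(0, 1, n)

base = sm.OLS(suicide_rate, sm.add_constant(np.column_stack([unemployment, gini]))).fit()
full = sm.OLS(suicide_rate, sm.add_constant(np.column_stack([unemployment, gini, past_orientation]))).fit()
print(base.aic, full.aic)   # lower AIC/BIC for `full` indicates improved goodness-of-fit
print(base.bic, full.bic)
```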
Reum, J C P
2011-12-01
Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ¹³C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ¹³C values were best approximated by simply adding 5.74 ‰ to bulk δ¹³C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
Laurenson, Yan C S M; Kyriazakis, Ilias; Bishop, Stephen C
2013-10-18
Estimated breeding values (EBV) for faecal egg count (FEC) and genetic markers for host resistance to nematodes may be used to identify resistant animals for selective breeding programmes. Similarly, targeted selective treatment (TST) requires the ability to identify the animals that will benefit most from anthelmintic treatment. A mathematical model was used to combine the two concepts and evaluate the potential of using genetic-based methods to identify animals for a TST regime. EBVs obtained by genomic prediction were predicted to be the best determinant criterion for TST in terms of the impact on average empty body weight and average FEC, whereas pedigree-based EBVs for FEC were predicted to be marginally worse than using phenotypic FEC as a determinant criterion. Whilst each method has financial implications, if the identification of host resistance is incorporated into wider genomic selection indices or selective breeding programmes, then genetic or genomic information may plausibly be included in TST regimes. Copyright © 2013 Elsevier B.V. All rights reserved.
Huang, Lihan; Hwang, Andy; Phillips, John
2011-10-01
The objective of this work is to develop a mathematical model for evaluating the effect of temperature on the rate of microbial growth. The new mathematical model is derived by combination and modification of the Arrhenius equation and the Eyring-Polanyi transition theory. The new model, suitable for both the suboptimal and the entire growth temperature ranges, was validated using a collection of 23 selected temperature-growth rate curves belonging to 5 groups of microorganisms, including Pseudomonas spp., Listeria monocytogenes, Salmonella spp., Clostridium perfringens, and Escherichia coli, from the published literature. The curve fitting is accomplished by nonlinear regression using the Levenberg-Marquardt algorithm. The resulting estimated growth rate (μ) values are highly correlated with the data collected from the literature (R² = 0.985, slope = 1.0, intercept = 0.0). The bias factor (Bf) of the new model is very close to 1.0, while the accuracy factor (Af) ranges from 1.0 to 1.22 for most data sets. The new model compares favorably with the Ratkowsky square root model and the Eyring equation. Even with more parameters, the Akaike information criterion, Bayesian information criterion, and mean square errors of the new model are not statistically different from those of the square root model and the Eyring equation, suggesting that the model can be used to describe the inherent relationship between temperature and microbial growth rates. The results of this work show that the new growth rate model is suitable for describing the effect of temperature on microbial growth rate. Practical Application: Temperature is one of the most significant factors affecting the growth of microorganisms in foods. This study attempts to develop and validate a mathematical model to describe the temperature dependence of microbial growth rate. The findings show that the new model is accurate and can be used to describe the effect of temperature on microbial growth rate in foods. Journal of Food Science © 2011 Institute of Food Technologists® No claim to original US government works.
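As a hedged illustration of the nonlinear fitting step (Levenberg-Marquardt is SciPy's default for unconstrained curve_fit), the sketch below fits the Ratkowsky square-root model, one of the comparison models named above, to hypothetical growth-rate data; the new Huang model itself is not reproduced here, and the data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def ratkowsky_sqrt(T, b, Tmin):
    """Suboptimal-range square-root model: sqrt(mu) = b * (T - Tmin)."""
    return (b * (T - Tmin)) ** 2

T = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])        # degrees C (toy data)
mu = np.array([0.01, 0.05, 0.12, 0.22, 0.35, 0.52, 0.71])      # growth rate, 1/h (toy data)

popt, pcov = curve_fit(ratkowsky_sqrt, T, mu, p0=[0.03, 2.0])  # unconstrained -> Levenberg-Marquardt
residuals = mu - ratkowsky_sqrt(T, *popt)
n, k = len(mu), len(popt)
aic = n * np.log(np.sum(residuals**2) / n) + 2 * k             # AIC from the residual sum of squares
print(popt, aic)
```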
A new tracer‐density criterion for heterogeneous porous media
Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar
2001-01-01
Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density‐induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic‐gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density‐induced instabilities in homogeneous media. The modified criterion was tested using a series of two‐dimensional heterogeneous intermediate‐scale tracer experiments and data from several detailed field tracer tests. The intermediate‐scale experimental facility (10.0 × 1.2 × 0.06 m) included both homogeneous and heterogeneous (σ²ln k = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ²ln k < 0.37), but measurements were sufficient to detect density‐induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density‐induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density‐induced sinking and differences in the onset of density‐induced sinking in two‐ and three‐dimensional systems.
Evaluation of volatile organic emissions from hazardous waste incinerators.
Sedman, R M; Esparza, J R
1991-01-01
Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach aimed at evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion to evaluate total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Since the use of the screening criterion always yielded estimates of risk that were greater than that determined by conventional methods and measuring total hydrocarbon emissions is a relatively simple analytical procedure, the use of the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928
Contribution of criterion A2 to PTSD screening in the presence of traumatic events.
Pereda, Noemí; Forero, Carlos G
2012-10-01
Criterion A2 according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association [APA], 1994) for posttraumatic stress disorder (PTSD) aims to assess the individual's subjective appraisal of an event, but it has been claimed that it might not be sufficiently specific for diagnostic purposes. We analyse the contribution of Criterion A2 and the DSM-IV criteria to detecting PTSD for the most distressing life events experienced by our subjects. Young adults (N = 1,033) reported their most distressing life events, together with PTSD criteria (Criteria A2, B, C, D, E, and F). PTSD prevalence and criterion specificity and agreement with probable diagnoses were estimated. Our results indicate that 80.30% of the individuals experienced traumatic events and met one or more PTSD criteria; 13.22% of cases received a positive diagnosis of PTSD. Criterion A2 showed poor agreement with the final probable PTSD diagnosis (correlation with PTSD .13, specificity = .10); excluding it from the PTSD diagnosis did not change the estimated disorder prevalence significantly. Based on these findings, it appears that Criterion A2 is scarcely specific and provides little information to confirm a probable PTSD case. Copyright © 2012 International Society for Traumatic Stress Studies.
Does the choice of nucleotide substitution models matter topologically?
Hoff, Michael; Orf, Stefan; Riehm, Benedikt; Darriba, Diego; Stamatakis, Alexandros
2016-03-24
In the context of a master level programming practical at the computer science department of the Karlsruhe Institute of Technology, we developed and make available an open-source code for testing all 203 possible nucleotide substitution models in the Maximum Likelihood (ML) setting under the common Akaike, corrected Akaike, and Bayesian information criteria. We address the question of whether model selection matters topologically, that is, whether conducting ML inferences under the optimal model, instead of a standard General Time Reversible (GTR) model, yields different tree topologies. We also assess to which degree models selected and trees inferred under the three standard criteria (AIC, AICc, BIC) differ. Finally, we assess whether the definition of the sample size (#sites versus #sites × #taxa) yields different models and, as a consequence, different tree topologies. We find that all three factors (by order of impact: nucleotide model selection, information criterion used, sample size definition) can yield topologically substantially different final tree topologies (topological difference exceeding 10%) for approximately 5% of the tree inferences conducted on the 39 empirical datasets used in our study. We find that using the best-fit nucleotide substitution model may change the final ML tree topology compared to an inference under a default GTR model. The effect is less pronounced when comparing distinct information criteria. Nonetheless, in some cases we did obtain substantial topological differences.
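A small sketch of the three criteria and of the two sample-size definitions discussed above (number of sites versus sites × taxa); the log-likelihood, parameter count, and alignment dimensions are placeholders.

```python
import math

def aic(loglik, k):
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

# hypothetical alignment: 1000 sites, 20 taxa, a model with k = 8 free parameters
loglik, k, sites, taxa = -12345.6, 8, 1000, 20
for n in (sites, sites * taxa):            # the two sample-size definitions compared in the study
    print(n, aicc(loglik, k, n), bic(loglik, k, n))
print(aic(loglik, k))                      # AIC does not depend on the sample-size definition
```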
Segmentation and clustering as complementary sources of information
NASA Astrophysics Data System (ADS)
Dale, Michael B.; Allison, Lloyd; Dale, Patricia E. R.
2007-03-01
This paper examines the effects of using a segmentation method to identify change-points or edges in vegetation. It identifies coherence (spatial or temporal) in place of unconstrained clustering. The segmentation method involves change-point detection along a sequence of observations so that each cluster formed is composed of adjacent samples; this is a form of constrained clustering. The protocol identifies one or more models, one for each section identified, and the quality of each is assessed using a minimum message length criterion, which provides a rational basis for selecting an appropriate model. Although the segmentation is less efficient than clustering, it does provide other information because it incorporates textural similarity as well as homogeneity. In addition it can be useful in determining various scales of variation that may apply to the data, providing a general method of small-scale pattern analysis.
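The sketch below illustrates the constrained-clustering idea with a simple dynamic program that places change-points so as to minimize within-segment squared error plus a per-segment penalty; the minimum message length scoring used in the paper is replaced here by this cruder cost purely for illustration, and the data are synthetic.

```python
import numpy as np

def segment(series, penalty):
    """Optimal change-points for cost = sum of within-segment SSE + penalty per segment."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def sse(i, j):                                     # sum of squared errors of x[i:j]
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    best = np.full(n + 1, np.inf); best[0] = 0.0
    prev = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            cost = best[i] + sse(i, j) + penalty
            if cost < best[j]:
                best[j], prev[j] = cost, i
    cuts, j = [], n                                    # backtrack segment boundaries
    while j > 0:
        cuts.append(j); j = prev[j]
    return sorted(cuts)[:-1]                           # interior change-points only

x = np.concatenate([np.random.default_rng(6).normal(m, 0.3, 50) for m in (0, 2, 1)])
print(segment(x, penalty=5.0))                         # expected near positions 50 and 100
```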
Vortex Advisory System Safety Analysis : Volume 1. Analytical Model
DOT National Transportation Integrated Search
1978-09-01
The Vortex Advisory System (VAS) is based on wind criterion--when the wind near the runway end is outside of the criterion, all interarrival Instrument Flight Rules (IFR) aircraft separations can be set at 3 nautical miles. Five years of wind data ha...
Inviscid criterion for decomposing scales
NASA Astrophysics Data System (ADS)
Zhao, Dongxiao; Aluie, Hussein
2018-05-01
The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has been proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] recently that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable to study inertial-range dynamics in variable-density and compressible turbulence. Our results have practical modeling implication in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
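A minimal numerical sketch of the Favre (density-weighted) decomposition referred to above: the coarse-grained velocity is the filtered momentum divided by the filtered density, here using a Gaussian kernel as the (assumed) coarse-graining filter and a synthetic 2-D field.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def favre_filter(rho, u, sigma):
    """Favre-filtered velocity: tilde(u) = bar(rho * u) / bar(rho)."""
    return gaussian_filter(rho * u, sigma) / gaussian_filter(rho, sigma)

# toy 2-D variable-density field
rng = np.random.default_rng(7)
rho = 1.0 + 0.5 * rng.random((128, 128))
u = rng.standard_normal((128, 128))

u_favre = favre_filter(rho, u, sigma=4.0)   # density-weighted decomposition
u_bar = gaussian_filter(u, 4.0)             # conventional (unweighted) filtering, for comparison
```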
NASA Astrophysics Data System (ADS)
Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit
2016-03-01
Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on a mutual information (MI) criterion for automatic classification of microcalcifications. We explored MI-based feature selection for various texture features. The proposed method was evaluated on the standardized Digital Database for Screening Mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using MI-based feature selection to obtain the most relevant features for the task and thus provide improved performance compared with using all features.
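A hedged sketch of MI-based feature selection for a benign/malignant classification task using scikit-learn; the features, labels, and feature count are synthetic placeholders, not the DDSM texture descriptors or the authors' classifier.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.standard_normal((300, 40))                 # 40 texture features per lesion (toy)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 0).astype(int)   # benign/malignant labels

selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

clf = RandomForestClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())     # all features
print(cross_val_score(clf, X_sel, y, cv=5).mean()) # top-10 MI-ranked features
```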
Complex networks untangle competitive advantage in Australian football
NASA Astrophysics Data System (ADS)
Braham, Calum; Small, Michael
2018-05-01
We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season, modelling the passes between players as weighted, directed edges. We show that analysis of these networks can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R² and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
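A sketch of the passing-network construction and the mean betweenness centrality measure highlighted above; the player names and pass counts are hypothetical, and since higher pass counts mean stronger connections, the inverse count is used here as the edge distance for betweenness, which is one common convention rather than necessarily the authors' choice.

```python
import networkx as nx

passes = [("Smith", "Jones", 12), ("Jones", "Brown", 7),
          ("Brown", "Smith", 4), ("Jones", "Smith", 9), ("Brown", "Jones", 5)]

G = nx.DiGraph()
for passer, receiver, count in passes:
    G.add_edge(passer, receiver, weight=count, distance=1.0 / count)

bc = nx.betweenness_centrality(G, weight="distance")   # shortest paths use inverse pass counts
mean_bc = sum(bc.values()) / len(bc)
print(mean_bc)
```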
Modeling of direct wafer bonding: Effect of wafer bow and etch patterns
NASA Astrophysics Data System (ADS)
Turner, K. T.; Spearing, S. M.
2002-12-01
Direct wafer bonding is an important technology for the manufacture of silicon-on-insulator substrates and microelectromechanical systems. As devices become more complex and require the bonding of multiple patterned wafers, there is a need to understand the mechanics of the bonding process. A general bonding criterion based on the competition between the strain energy accumulated in the wafers and the surface energy that is dissipated as the bond front advances is developed. The bonding criterion is used to examine the case of bonding bowed wafers. An analytical expression for the strain energy accumulation rate, which is the quantity that controls bonding, and the final curvature of a bonded stack is developed. It is demonstrated that the thickness of the wafers plays a large role and bonding success is independent of wafer diameter. The analytical results are verified through a finite element model and a general method for implementing the bonding criterion numerically is presented. The bonding criterion developed permits the effect of etched features to be assessed. Shallow etched patterns are shown to make bonding more difficult, while it is demonstrated that deep etched features can facilitate bonding. Model results and their process design implications are discussed in detail.
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. As compared to classical design methods that consist in simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-terms tests or experience) to determine the stability of the excavation, the proposed approach is consistent both with short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than the ones predicted by the stress-based criteria, which is conservative yet constructive for design purposes.
Platzer, Christine; Bröder, Arndt; Heck, Daniel W
2014-05-01
Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.
NASA Astrophysics Data System (ADS)
Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2004-06-01
Two constitutive models used to describe the plastic behavior of sheet metals in the numerical simulation of sheet metal forming process are studied: a recently proposed advanced constitutive model based on the Teodosiu microstructural model and the Cazacu Barlat yield criterion is compared with a more classical one, based on the Swift law and the Hill 1948 yield criterion. These constitutive models are implemented into DD3IMP, a finite element home code specifically developed to simulate sheet metal forming processes, which generically is a 3-D elastoplastic finite element code with an updated Lagrangian formulation, following a fully implicit time integration scheme, large elastoplastic strains and rotations. Solid finite elements and parametric surfaces are used to model the blank sheet and tool surfaces, respectively. Some details of the numerical implementation of the constitutive models are given. Finally, the theory is illustrated with the numerical simulation of the deep drawing of a cylindrical cup. The results show that the proposed advanced constitutive model predicts with more exactness the final shape (medium height and ears profile) of the formed part, as one can conclude from the comparison with the experimental results.
Information presentation format moderates the unconscious-thought effect: The role of recollection.
Abadie, Marlène; Waroquier, Laurent; Terrier, Patrice
2016-09-01
The unconscious-thought effect occurs when distraction improves complex decision-making. In two experiments using the unconscious-thought paradigm, we investigated the effect of presentation format of decision information (i) on memory for decision-relevant information and (ii) on the quality of decisions made after distraction, conscious deliberation or immediately. We used the process-dissociation procedure to measure recollection and familiarity. The two studies showed that presenting information blocked per criterion led participants to recollect more decision-relevant details compared to a presentation by option. Moreover, a Bayesian meta-analysis of the two studies provided strong evidence that conscious deliberation resulted in better decisions when the information was presented blocked per criterion and substantial evidence that distraction improved decision quality when the information was presented blocked per option. Finally, Study 2 revealed that the recollection of decision-relevant details mediated the effect of presentation format on decision quality in the deliberation condition. This suggests that recollection contributes to conscious deliberation efficacy.
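For readers unfamiliar with the process-dissociation procedure mentioned above, its standard estimating equations (Jacoby, 1991) are simple; the sketch below applies them to two illustrative probabilities that are not the study's data.

```python
# Standard process-dissociation estimates of recollection R and familiarity F
# from inclusion/exclusion performance:
#   P(inclusion) = R + (1 - R) * F      P(exclusion) = (1 - R) * F
# The two probabilities below are placeholders, not the study's results.
p_inclusion = 0.72
p_exclusion = 0.24

R = p_inclusion - p_exclusion          # recollection estimate
F = p_exclusion / (1.0 - R)            # familiarity estimate
print(f"R = {R:.2f}, F = {F:.2f}")
```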
Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.
Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram
2012-01-01
In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface Electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a Half-Gaussian filter and a Chebyshev Type- II filter, respectively. Spectral models - Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE), Spectral Analysis with Frequency Dependent Resolution (SPFRD) - are extracted from sEMG signals as input and skeletal muscle force as output signal. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating the input and output. After the individual models are extracted, the models are fused by a probability based KIC fusion algorithm. The results show that the SPFRD spectral models perform better than SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.
Role of optimization criterion in static asymmetric analysis of lumbar spine load.
Daniel, Matej
2011-10-01
A common method for load estimation in biomechanics is inverse dynamics optimization, where the muscle activation pattern is found by minimizing or maximizing the optimization criterion. It has been shown that various optimization criteria predict remarkably similar muscle activation patterns and intra-articular contact forces during leg motion. The aim of this paper is to study the effect of the choice of optimization criterion on L4/L5 loading during static asymmetric loading. Upright standing with a weight in one stretched arm was taken as a representative position. A musculoskeletal model of the lumbar spine was created from CT images of the Visible Human Project. Several criteria were tested based on the minimization of muscle forces, muscle stresses, and spinal load. All criteria provide the same level of lumbar spine loading (the difference is below 25%), except the criterion of minimum lumbar shear force, which predicts an unrealistically high spinal load and should not be considered further. The estimated spinal load and predicted muscle force activation pattern are in accordance with intradiscal pressure measurements and EMG measurements. L4/L5 spinal loads of 1312 N, 1674 N, and 1993 N were predicted for hand-held masses of 2, 5, and 8 kg, respectively, using the criterion of minimum muscle stress cubed. As the optimization criteria do not considerably affect the spinal load, their choice is not critical in further clinical or ergonomic studies and a computationally simpler criterion can be used.
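To illustrate what a "minimum muscle stress cubed" criterion means in practice, the sketch below poses it as a constrained optimization with a single moment-equilibrium constraint. The moment arms, physiological cross-sectional areas (PCSAs) and target moment are invented numbers, not the paper's musculoskeletal model.

```python
# Toy static-optimization sketch: choose muscle forces that balance a
# required joint moment while minimizing sum((F_i / PCSA_i)**3).
import numpy as np
from scipy.optimize import minimize

moment_arms = np.array([0.05, 0.04, 0.06])   # m (hypothetical)
pcsa = np.array([18.0, 25.0, 12.0])          # cm^2 (hypothetical)
target_moment = 50.0                          # N*m to be balanced (hypothetical)

def cost(F):
    # Sum of cubed muscle stresses (force per unit cross-sectional area).
    return np.sum((F / pcsa) ** 3)

constraints = ({"type": "eq",
                "fun": lambda F: moment_arms @ F - target_moment},)
bounds = [(0.0, None)] * len(pcsa)            # muscles can only pull

res = minimize(cost, x0=np.full(3, 100.0),
               bounds=bounds, constraints=constraints)
print("muscle forces (N):", np.round(res.x, 1))
```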
Soft Clustering Criterion Functions for Partitional Document Clustering
2004-05-26
The refinement phase ends as soon as an iteration is performed in which no documents move between clusters (i.e., each document remains in the cluster that it already belongs to). The clustering solution is compared with the one obtained by the hard criterion functions, and a comprehensive experimental evaluation involving twelve different datasets is presented.
Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas
2016-01-01
Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities. (2) Predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features. (3) Illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan
2017-01-01
This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model describing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. The two models were compared and validated by calculating the determination coefficient (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% have pulmonary hypertension that is completely reversible when euthyroidism is obtained. The factors causing pulmonary hypertension were identified: previously known ones are the level of free thyroxin, pulmonary vascular resistance and cardiac output; new factors identified in this study are pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically of parabola type) better than the linear one. The best model describing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is therefore given by a second-degree polynomial equation, whose graphical representation is a parabola.
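A minimal, synthetic-data sketch of the kind of comparison described above (criteria 1, 3 and 4): a first-degree and a second-degree regression are fitted and compared by the determination coefficient, AIC and an F-test for the added quadratic term. The numbers are illustrative only and unrelated to the study's measurements.

```python
# Compare a linear and a quadratic regression on synthetic data using
# R^2, AIC, and an F-test for the restriction (no quadratic term).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 60)                              # hypothetical predictor
y = 30 + 1.5 * x + 0.4 * x**2 + rng.normal(0, 3, 60)    # hypothetical outcome

X_lin = sm.add_constant(np.column_stack([x]))
X_quad = sm.add_constant(np.column_stack([x, x**2]))

m_lin = sm.OLS(y, X_lin).fit()
m_quad = sm.OLS(y, X_quad).fit()

print("R2  linear %.3f  quadratic %.3f" % (m_lin.rsquared, m_quad.rsquared))
print("AIC linear %.1f  quadratic %.1f" % (m_lin.aic, m_quad.aic))
print(m_quad.compare_f_test(m_lin))   # (F statistic, p-value, df difference)
```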
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.
Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds utilization values into several buffers based on the type of resources and time span size. Buffers are read by R language based statistical system. These buffers' data are checked to determine whether their data follows Gaussian distribution or not. In case of following Gaussian distribution, Autoregressive Integrated Moving Average (ARIMA) is applied; otherwise Autoregressive Neural Network (AR-NN) is applied. In ARIMA process, a model is selected based on minimum Akaike Information Criterion (AIC) values. Similarly, in AR-NN process, a network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.
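The ARIMA branch of the pipeline described above can be sketched as a small grid search that keeps the order with the lowest AIC. The utilization series below is synthetic, and the AR-NN/NIC branch is not shown; this is an illustration, not the paper's implementation (which uses an R-based statistical system).

```python
# Select an ARIMA order by minimum AIC over a small candidate grid.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Synthetic CPU-utilization-like series (percent), not real cloud traces.
cpu = 50 + 0.1 * np.cumsum(rng.normal(0, 1, 200)) + rng.normal(0, 2, 200)

best = None
for p in range(3):
    for d in range(2):
        for q in range(3):
            try:
                fit = ARIMA(cpu, order=(p, d, q)).fit()
            except Exception:
                continue  # skip orders that fail to converge
            if best is None or fit.aic < best[1]:
                best = ((p, d, q), fit.aic, fit)

order, aic, fit = best
print(f"selected ARIMA{order} with AIC = {aic:.1f}")
print("next-step forecast:", fit.forecast(1))
```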
Neuropathological diagnostic criteria for Alzheimer's disease.
Murayama, Shigeo; Saito, Yuko
2004-09-01
Neuropathological diagnostic criteria for Alzheimer's disease (AD) are based on tau-related pathology: NFT or neuritic plaques (NP). The Consortium to Establish a Registry for Alzheimer's disease (CERAD) criterion evaluates the highest density of neocortical NP from 0 (none) to C (abundant). Clinical documentation of dementia and NP stage A in younger cases, B in young old cases and C in older cases fulfils the criterion of AD. The CERAD criterion is most frequently used in clinical outcome studies because of its inclusion of clinical information. Braak and Braak's criterion evaluates the density and distribution of NFT and classifies them into: I/II, entorhinal; III/IV, limbic; and V/VI, neocortical stage. These three stages correspond to normal cognition, cognitive impairment and dementia, respectively. As Braak's criterion is based on morphological evaluation of the brain alone, this criterion is usually adopted in the research setting. The National Institute for Aging and Ronald and Nancy Reagan Institute of the Alzheimer's Association criterion combines these two criteria and categorizes cases into NFT V/VI and NP C, NFT III/IV and NP B, and NFT I/II and NP A, corresponding to high, middle and low probability of AD, respectively. As most AD cases in the aged population are categorized into Braak tangle stage IV and CERAD stage C, the usefulness of this criterion has not yet been determined. The combination of Braak's NFT stage equal to or above IV and Braak's senile plaque Stage C provides, arguably, the highest sensitivity and specificity. In future, the criteria should include in vivo dynamic neuropathological data, including 3D MRI, PET scan and CSF biomarkers, as well as more sensitive and specific immunohistochemical and immunochemical grading of AD.
Copula based flexible modeling of associations between clustered event times.
Geerdens, Candida; Claeskens, Gerda; Janssen, Paul
2016-07-01
Multivariate survival data are characterized by the presence of correlation between event times within the same cluster. First, we build multi-dimensional copulas with flexible and possibly symmetric dependence structures for such data. In particular, clustered right-censored survival data are modeled using mixtures of max-infinitely divisible bivariate copulas. Second, these copulas are fit by a likelihood approach where the vast amount of copula derivatives present in the likelihood is approximated by finite differences. Third, we formulate conditions for clustered right-censored survival data under which an information criterion for model selection is either weakly consistent or consistent. Several of the familiar selection criteria are included. A set of four-dimensional data on time-to-mastitis is used to demonstrate the developed methodology.
Qiao, Shan; Li, Xiaoming; Zhao, Guoxiang; Zhao, Junfeng; Stanton, Bonita
2014-07-01
To delineate the trajectories of loneliness and self-esteem over time among children affected by parental HIV and AIDS, and to examine how their perceived social support (PSS) influenced initial scores and change rates of these two psychological outcomes. We collected longitudinal data from children affected by parental HIV/AIDS in rural central China. Children 6-18 years of age at baseline were eligible to participate in the study and were assessed annually for 3 years. Multilevel regression models for change were used to assess the effect of baseline PSS on the trajectories of loneliness and self-esteem over time. We employed maximum likelihood estimates to fit multilevel models and specified the between-individual covariance matrix as 'unstructured' to allow correlation among the different sources of variance. Statistics including -2 Log Likelihood, Akaike Information Criterion and Bayesian Information Criterion were used in evaluating the model fit. The results of multilevel analyses indicated that loneliness scores significantly declined over time. Controlling for demographic characteristics, children with higher PSS reported significantly lower baseline loneliness score and experienced a slower rate of decline in loneliness over time. Children with higher PSS were more likely to report higher self-esteem scores at baseline. However, the self-esteem scores remained stable over time controlling for baseline PSS and all the other variables. The positive effect of PSS on psychological adjustment may imply a promising approach for future intervention among children affected by HIV/AIDS, in which efforts to promote psychosocial well being could focus on children and families with lower social support. We also call for a greater understanding of children's psychological adjustment process in various contexts of social support and appropriate adaptations of evidence-based interventions to meet their diverse needs.
Two-dimensional thermal modeling of power monolithic microwave integrated circuits (MMIC's)
NASA Technical Reports Server (NTRS)
Fan, Mark S.; Christou, Aris; Pecht, Michael G.
1992-01-01
Numerical simulations of the two-dimensional temperature distributions for a typical GaAs MMIC circuit are conducted, aiming at understanding the heat conduction process of the circuit chip and providing temperature information for device reliability analysis. The method used is to solve the two-dimensional heat conduction equation with a control-volume-based finite difference scheme. In particular, the effects of the power dissipation and the ambient temperature are examined, and the criterion for the worst operating environment is discussed in terms of the allowed highest device junction temperature.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-22
... least one, but no more than two, site-specific research projects to test innovative approaches to... Criterion; Disability and Rehabilitation Research Projects and Spinal Cord Injury Model Systems Centers and Multi-Site Collaborative Research Projects AGENCY: Office of Special Education and Rehabilitative...
Two related numerical codes, 3DFEMWATER and 3DLEWASTE, are presented and used to delineate wellhead protection areas in agricultural regions using the assimilative capacity criterion. 3DFEMWATER (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media) ...
Aging: Sensitivity versus Criterion in Taste Perception.
ERIC Educational Resources Information Center
Kushnir, T.; Shapira, N.
1983-01-01
Employed the signal-detection paradigm as a model for investigating age-related biological versus cognitive effects on perceptual behavior. Old and young subjects reported the presence or absence of sugar in threshold-level solutions and tap water. Older subjects displayed a higher detection threshold and adopted a stricter decision criterion.…
ERIC Educational Resources Information Center
Parry, Malcolm
1998-01-01
Explains a novel way of approaching centripetal force: theory is used to predict an orbital period at which a toy train will topple from a circular track. The demonstration has elements of prediction (a criterion for a good model) and suspense (a criterion for a good demonstration). The demonstration proved useful in undergraduate physics and…
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, in order to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method for establishing discriminant validity has emerged: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study involving 429 respondents were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity is an issue when the HTMT criterion is employed. This shows that the latent variables under study face a multicollinearity issue and should be examined in further detail. This also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
Debast, Inge; Rossi, Gina; van Alphen, S P J
2018-04-01
The alternative model for personality disorders in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders ( DSM-5) is considered an important step toward a possibly better conceptualization of personality pathology in older adulthood, by the introduction of levels of personality functioning (Criterion A) and trait dimensions (Criterion B). Our main aim was to examine age-neutrality of the Short Form of the Severity Indices of Personality Problems (SIPP-SF; Criterion A) and Personality Inventory for DSM-5-Brief Form (PID-5-BF; Criterion B). Differential item functioning (DIF) analyses and more specifically the impact on scale level through differential test functioning (DTF) analyses made clear that the SIPP-SF was more age-neutral (6% DIF, only one of four domains showed DTF) than the PID-5-BF (25% DIF, all four tested domains had DTF) in a community sample of older and younger adults. Age differences in convergent validity also point in the direction of differences in underlying constructs. Concurrent and criterion validity in geriatric psychiatry inpatients suggest that both the SIPP-SF scales measuring levels of personality functioning (especially self-functioning) and the PID-5-BF might be useful screening measures in older adults despite age-neutrality not being confirmed.
Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A
2016-01-01
Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured from quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address the question of whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model strikes this balance better than a conventional logistic model. Second, our theoretical analysis suggests varying the pulse width among applied stimuli as a necessary condition to prevent structural non-identifiability. In addition, the numerically implemented profile likelihood approach reveals structural and practical non-identifiability. Our model-based approach, which integrates psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
Juang, Wang-Chuan; Huang, Sin-Jhih; Huang, Fong-Dee; Cheng, Pei-Wen; Wann, Shue-Ren
2017-01-01
Objective Emergency department (ED) overcrowding is acknowledged as an increasingly important issue worldwide. Hospital managers are increasingly paying attention to ED crowding in order to provide higher quality medical services to patients. One of the crucial elements for a good management strategy is demand forecasting. Our study sought to construct an adequate model and to forecast monthly ED visits. Methods We retrospectively gathered monthly ED visits from January 2009 to December 2016 to carry out a time series autoregressive integrated moving average (ARIMA) analysis. Initial development of the model was based on past ED visits from 2009 to 2016. A best-fit model was further employed to forecast the monthly data of ED visits for the next year (2016). Finally, we evaluated the predicted accuracy of the identified model with the mean absolute percentage error (MAPE). The software packages SAS/ETS V.9.4 and Office Excel 2016 were used for all statistical analyses. Results A series of statistical tests showed that six models, including ARIMA (0, 0, 1), ARIMA (1, 0, 0), ARIMA (1, 0, 1), ARIMA (2, 0, 1), ARIMA (3, 0, 1) and ARIMA (5, 0, 1), were candidate models. The model that gave the minimum Akaike information criterion and Schwartz Bayesian criterion and followed the assumptions of residual independence was selected as the adequate model. Finally, a suitable ARIMA (0, 0, 1) structure, yielding a MAPE of 8.91%, was identified and obtained as Visit_t = 7111.161 + a_t + 0.37462 a_(t-1). Conclusion The ARIMA (0, 0, 1) model can be considered adequate for predicting future ED visits, and its forecast results can be used to aid decision-making processes. PMID:29196487
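As a sketch of how such a model is checked, the code below fits an ARIMA(0, 0, 1) model to a synthetic monthly series, forecasts a 12-month holdout and computes the MAPE; the series is invented and will not reproduce the fitted equation or the 8.91% figure above.

```python
# Fit ARIMA(0,0,1) to a synthetic monthly series and score a 12-month
# holdout with the mean absolute percentage error (MAPE).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
n = 108                                       # 9 years of monthly data
e = rng.normal(0, 300, n + 1)
visits = 7000 + e[1:] + 0.4 * e[:-1]          # MA(1)-like series (hypothetical)

train, test = visits[:96], visits[96:]
fit = ARIMA(train, order=(0, 0, 1)).fit()
forecast = fit.forecast(steps=12)

mape = np.mean(np.abs((test - forecast) / test)) * 100
print(f"MAPE on the 12-month holdout: {mape:.2f}%")
```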
A model of the human supervisor
NASA Technical Reports Server (NTRS)
Kok, J. J.; Vanwijk, R. A.
1977-01-01
A general model of the human supervisor's behavior is given. Submechanisms of the model include the observer/reconstructor, decision-making, and controller. A set of hypotheses is postulated for the relations between the task variables and the parameters of the different submechanisms of the model. Verification of the model hypotheses is considered using variations in the task variables. An approach is suggested for the identification of the model parameters, which makes use of a multidimensional error criterion. Each of the elements of this multidimensional criterion corresponds to a certain aspect of the supervisor's behavior, and is directly related to a particular part of the model and its parameters. This approach offers good possibilities for an efficient parameter adjustment procedure.
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2012-04-01
The results of probability forecast methods for summer storm and hazard wind over the territories of Russia and Europe are presented in this paper. These methods use a hydrodynamic-statistical model of these phenomena. The statistical model was developed for the recognition of situations involving these phenomena. For this purpose, samples of the values of atmospheric parameters (n = 40) for the presence and for the absence of storm and hazard wind were accumulated. The compression of the predictor space without loss of information was achieved by a special algorithm (k=7< 24m/s, the values of 75% 29m/s or the area of the tornado and strong squalls. The probability forecast was evaluated with the Brier criterion; the estimation was successful, with B = 0.37 for the European part of Russia. The application of the probability forecast of storm and hazard winds helps to mitigate economic losses when the errors of the first and second kind of the categorical storm-wind forecast are not small. Many examples of the storm wind probability forecast are presented in this report.
Primi, Ricardo
2014-09-01
Ability testing has been criticized because understanding of the construct being assessed is incomplete and because the testing has not yet been satisfactorily improved in accordance with new knowledge from cognitive psychology. This article contributes to the solution of this problem through the application of item response theory and Susan Embretson's cognitive design system for test development in the development of a fluid intelligence scale. This study is based on findings from cognitive psychology; instead of focusing on the development of a test, it focuses on the definition of a variable for the creation of a criterion-referenced measure for fluid intelligence. A geometric matrix item bank with 26 items was analyzed with data from 2,797 undergraduate students. The main result was a criterion-referenced scale that was based on information from item features that were linked to cognitive components, such as storage capacity, goal management, and abstraction; this information was used to create the descriptions of selected levels of a fluid intelligence scale. The scale proposed that the levels of fluid intelligence range from the ability to solve problems containing a limited number of bits of information with obvious relationships through the ability to solve problems that involve abstract relationships under conditions that are confounded with an information overload and distraction by mixed noise. This scale can be employed in future research to provide interpretations for the measurements of the cognitive processes mastered and the types of difficulty experienced by examinees. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
Potential Singularity for a Family of Models of the Axisymmetric Incompressible Flow
NASA Astrophysics Data System (ADS)
Hou, Thomas Y.; Jin, Tianling; Liu, Pengfei
2017-03-01
We study a family of 3D models for the incompressible axisymmetric Euler and Navier-Stokes equations. The models are derived by changing the strength of the convection terms in the equations written using a set of transformed variables. The models share several regularity results with the Euler and Navier-Stokes equations, including an energy identity, the conservation of a modified circulation quantity, the BKM criterion and the Prodi-Serrin criterion. The inviscid models with weak convection are numerically observed to develop stable self-similar singularity with the singular region traveling along the symmetric axis, and such singularity scenario does not seem to persist for strong convection.
Zipper model for the melting of thin films
NASA Astrophysics Data System (ADS)
Abdullah, Mikrajuddin; Khairunnisa, Shafira; Akbar, Fathan
2016-01-01
We propose an alternative model to Lindemann’s criterion for melting that explains the melting of thin films on the basis of a molecular zipper-like mechanism. Using this model, a unique criterion for melting is obtained. We compared the results of the proposed model with experimental data of melting points and heat of fusion for many materials and obtained interesting results. The interesting thing reported here is how complex physics problems can sometimes be modeled with simple objects around us that seemed to have no correlation. This kind of approach is sometimes very important in physics education and should always be taught to undergraduate or graduate students.
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
Particle-size distribution models for the conversion of Chinese data to FAO/USDA system.
Shangguan, Wei; Dai, YongJiu; García-Gutiérrez, Carlos; Yuan, Hua
2014-01-01
We investigated eleven particle-size distribution (PSD) models to determine the appropriate models for describing the PSDs of 16349 Chinese soil samples. These data are based on three soil texture classification schemes, including one ISSS (International Society of Soil Science) scheme with four data points and two Katschinski's schemes with five and six data points, respectively. The adjusted coefficient of determination (r2), Akaike's information criterion (AIC), and geometric mean error ratio (GMER) were used to evaluate the model performance. The soil data were converted to the USDA (United States Department of Agriculture) standard using PSD models and the fractal concept. The performance of the PSD models was affected by soil texture and the fraction classification scheme. The performance of the PSD models also varied with the clay content of soils. The Anderson, Fredlund, modified logistic growth, Skaggs, and Weibull models were the best.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
Optimal sensor placement for spatial lattice structure based on genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian
2008-10-01
Optimal sensor placement plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that the structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal testing, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) are taken as fitness functions, so that three placement designs are produced. A decimal two-dimensional array coding method is proposed to code the solution instead of the binary coding method. A forced mutation operator is introduced when identical genes appear after the crossover procedure. A computational simulation of a 12-bay plain truss model has been implemented to demonstrate the feasibility of the three optimization algorithms above. The optimal sensor placements obtained with the improved genetic algorithm are compared with those obtained by the existing genetic algorithm using the binary coding method. Furthermore, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results show that the genetic algorithm innovations proposed in this paper enlarge the gene storage and improve the convergence of the algorithm. More importantly, the three optimal sensor placement methods all provide reliable results and accurately identify the vibration characteristics of the 12-bay plain truss model.
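The modal assurance criterion used above as one of the fitness functions has a standard closed form, MAC = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b)); a short sketch with arbitrary example mode shapes (not the truss model's) follows.

```python
# Modal assurance criterion (MAC) between two mode-shape vectors.
# Values near 1 indicate highly correlated mode shapes.
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    num = np.abs(phi_a @ phi_b) ** 2
    den = (phi_a @ phi_a) * (phi_b @ phi_b)
    return float(num / den)

# Arbitrary example mode shapes (hypothetical, not from the 12-bay truss).
phi_1 = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])
phi_2 = np.array([0.0, 0.59, 0.95, 0.95, 0.59, 0.0])
print(f"MAC(phi_1, phi_1) = {mac(phi_1, phi_1):.3f}")   # ~1.0
print(f"MAC(phi_1, phi_2) = {mac(phi_1, phi_2):.3f}")   # < 1
```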
Inference of gene regulatory networks from time series by Tsallis entropy
2011-01-01
Background The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRNs inference methods based on entropy (mutual information), a new criterion function is here proposed. Results In this paper we introduce the use of generalized entropy proposed by Tsallis, for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transference function is obtained from random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, DREAM time series data presents variation of network size and its topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement of accuracy was observed in the experimental results by reducing the number of false connections in the inferred topology by the non-Shannon entropy. The obtained best free parameter of the Tsallis entropy was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRNs inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/. PMID:21545720
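For reference, the generalized (Tsallis) entropy that replaces the Shannon term in the criterion function above is S_q = (1 - sum_i p_i^q) / (q - 1), which recovers the Shannon entropy as q approaches 1. The sketch below computes it for an arbitrary probability vector over the q-range reported as best; the distribution is invented, not gene-expression data.

```python
# Tsallis (generalized) entropy S_q = (1 - sum_i p_i**q) / (q - 1),
# tending to the Shannon entropy (in nats) as q -> 1.
import numpy as np

def tsallis_entropy(p: np.ndarray, q: float) -> float:
    p = p[p > 0]                                  # ignore zero-probability states
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))      # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = np.array([0.5, 0.25, 0.125, 0.125])           # arbitrary distribution
for q in (1.0, 2.5, 3.5):                         # includes the reported best range
    print(f"q = {q}: S_q = {tsallis_entropy(p, q):.4f}")
```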
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
Building a maintenance policy through a multi-criterion decision-making model
NASA Astrophysics Data System (ADS)
Faghihinia, Elahe; Mollaverdi, Naser
2012-08-01
A major competitive advantage of production and service systems is establishing a proper maintenance policy. Therefore, maintenance managers should make maintenance decisions that best fit their systems. Multi-criterion decision-making methods can take into account a number of aspects associated with the competitiveness factors of a system. This paper presents a multi-criterion decision-aided maintenance model with three criteria that have more influence on decision making: reliability, maintenance cost, and maintenance downtime. The Bayesian approach has been applied to confront maintenance failure data shortage. Therefore, the model seeks to make the best compromise between these three criteria and establish replacement intervals using Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE II), integrating the Bayesian approach with regard to the preference of the decision maker to the problem. Finally, using a numerical application, the model has been illustrated, and for a visual realization and an illustrative sensitivity analysis, PROMETHEE GAIA (the visual interactive module) has been used. Use of PROMETHEE II and PROMETHEE GAIA has been made with Decision Lab software. A sensitivity analysis has been made to verify the robustness of certain parameters of the model.
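A compact sketch of the PROMETHEE II net-flow ranking underlying the model is given below, using invented maintenance alternatives, criterion values, weights and a simple "usual" preference function; the paper's actual data, preference functions and Bayesian reliability estimates are not reproduced here.

```python
# PROMETHEE II with the "usual" preference function P(d) = 1 if d > 0 else 0:
# compute weighted pairwise preferences, positive/negative flows, and rank by
# net flow.
import numpy as np

# Rows: maintenance policies (hypothetical). Columns: reliability,
# -maintenance cost, -downtime (costs negated so larger is better).
scores = np.array([
    [0.92, -1200.0, -8.0],
    [0.88,  -900.0, -5.0],
    [0.95, -1500.0, -9.0],
])
weights = np.array([0.5, 0.3, 0.2])          # hypothetical weights, sum to 1

n = scores.shape[0]
pi = np.zeros((n, n))                        # aggregated preference indices
for a in range(n):
    for b in range(n):
        if a != b:
            pref = (scores[a] > scores[b]).astype(float)   # usual criterion
            pi[a, b] = weights @ pref

phi_plus = pi.sum(axis=1) / (n - 1)          # positive outranking flow
phi_minus = pi.sum(axis=0) / (n - 1)         # negative outranking flow
net_flow = phi_plus - phi_minus
print("net flows:", np.round(net_flow, 3))
print("ranking (best first):", np.argsort(-net_flow))
```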
Medical privacy protection based on granular computing.
Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng
2004-10-01
Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains when he received the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000; Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and previous bin size and logical approaches. These two new criteria are implemented in a prototype system Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.
ERIC Educational Resources Information Center
Xi, Xiaoming
2008-01-01
Although the primary use of the speaking section of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT Speaking test) is to inform admissions decisions at English medium universities, it may also be useful as an initial screening measure for international teaching assistants (ITAs). This study provides criterion-related…
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
NASA Astrophysics Data System (ADS)
Yan, Wei; Chang, Yuwen
2016-12-01
Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion processes (Wiener process and Poisson process). Then the corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Romano, Nunzio
2016-01-01
This study explores the feasibility of identifying the effective soil hydraulic parameterization of a layered soil profile by using a conventional unsteady drainage experiment leading to field capacity. The flux-based field capacity criterion is attained by subjecting the soil profile to a synthetic drainage process implemented numerically in the Soil-Water-Atmosphere-Plant (SWAP) model. The effective hydraulic parameterization is associated to either aggregated or equivalent parameters, the former being determined by the geometrical scaling theory while the latter is obtained through the inverse modeling approach. Outcomes from both these methods depend on information that is sometimes difficult to retrieve at local scale and rather challenging or virtually impossible at larger scales. The only knowledge of topsoil hydraulic properties, for example, as retrieved by a near-surface field campaign or a data assimilation technique, is often exploited as a proxy to determine effective soil hydraulic parameterization at the largest spatial scales. Comparisons of the effective soil hydraulic characterization provided by these three methods are conducted by discussing the implications for their use and accounting for the trade-offs between required input information and model output reliability. To better highlight the epistemic errors associated to the different effective soil hydraulic properties and to provide some more practical guidance, the layered soil profiles are then grouped by using the FAO textural classes. For the moderately heterogeneous soil profiles available, all three approaches guarantee a general good predictability of the actual field capacity values and provide adequate identification of the effective hydraulic parameters. Conversely, worse performances are encountered for the highly variable vertical heterogeneity, especially when resorting to the "topsoil-only" information. In general, the best performances are guaranteed by the equivalent parameters, which might be considered a reference for comparisons with other techniques. As might be expected, the information content of the soil hydraulic properties pertaining only to the uppermost soil horizon is rather inefficient and also not capable to map out the hydrologic behavior of the real vertical soil heterogeneity since the drainage process is significantly affected by profile layering in almost all cases.
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong
2012-10-01
In this paper, a novel lesion segmentation within breast ultrasound (BUS) image based on the cellular automata principle is proposed. Its energy transition function is formulated based on global image information difference and local image information difference using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image difference in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computation complexity. The proposed method was applied to 205 clinical BUS images for studying its characteristic and functionality, and several overlapping area error metrics and statistical evaluation methods are utilized for evaluating its performance. The experimental results demonstrate that the proposed method can handle BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
Castellano, Sergio; Cermelli, Paolo
2011-04-07
Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
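A toy simulation of the model's core mechanism as described above, i.e., noisy evidence about a prospective mate integrated over time until a stopping-rule threshold is reached; all parameter values are invented and this is not the authors' implementation.

```python
# Sequential-sampling sketch: integrate noisy quality evidence until the
# accumulated value crosses an accept (+threshold) or reject (-threshold)
# boundary; return the decision and the time taken.
import numpy as np

def evaluate_mate(true_quality, threshold=5.0, drift_scale=0.1,
                  noise_sd=1.0, max_time=1000, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    evidence = 0.0
    for t in range(1, max_time + 1):
        evidence += drift_scale * true_quality + rng.normal(0.0, noise_sd)
        if evidence >= threshold:
            return "accept", t
        if evidence <= -threshold:
            return "reject", t
    return "undecided", max_time

rng = np.random.default_rng(3)
for quality in (-2.0, 0.5, 2.0):            # hypothetical mate qualities
    print(quality, evaluate_mate(quality, rng=rng))
```

With a lower threshold the simulated decisions become faster but less accurate, which is the speed-accuracy tradeoff discussed in the abstract.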
Selecting among competing models of electro-optic, infrared camera system range performance
Nichols, Jonathan M.; Hines, James E.; Nichols, James D.
2013-01-01
Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.
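Akaike differences and Akaike weights are one common way to summarize such a comparison; the sketch below uses made-up log-likelihoods and parameter counts, not the paper's observer-response models, and the weighting step is an addition rather than something the abstract describes.

```python
# Given maximized log-likelihoods and parameter counts for candidate models,
# compute AIC, AIC differences, and Akaike weights.
import numpy as np

log_lik = np.array([-412.3, -405.1, -404.8])   # hypothetical values
n_params = np.array([2, 4, 6])

aic = 2 * n_params - 2 * log_lik
delta = aic - aic.min()                        # AIC differences
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                       # relative support for each model

for i, (a, d, w) in enumerate(zip(aic, delta, weights)):
    print(f"model {i}: AIC = {a:.1f}, delta = {d:.1f}, weight = {w:.2f}")
```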
Genetic parameters for stayability to consecutive calvings in Zebu cattle.
Silva, D O; Santana, M L; Ayres, D R; Menezes, G R O; Silva, L O C; Nobre, P R C; Pereira, R J
2017-12-22
Longer-lived cows tend to be more profitable and the stayability trait is a selection criterion correlated to longevity. An alternative to the traditional approach to evaluate stayability is its definition based on consecutive calvings, whose main advantage is the more accurate evaluation of young bulls. However, no study using this alternative approach has been conducted for Zebu breeds. Therefore, the objective of this study was to compare linear random regression models to fit stayability to consecutive calvings of Guzerá, Nelore and Tabapuã cows and to estimate genetic parameters for this trait in the respective breeds. Data up to the eighth calving were used. The models included the fixed effects of age at first calving and year-season of birth of the cow and the random effects of contemporary group, additive genetic, permanent environmental and residual. Random regressions were modeled by orthogonal Legendre polynomials of order 1 to 4 (2 to 5 coefficients) for contemporary group, additive genetic and permanent environmental effects. Using Deviance Information Criterion as the selection criterion, the model with 4 regression coefficients for each effect was the most adequate for the Nelore and Tabapuã breeds and the model with 5 coefficients is recommended for the Guzerá breed. For Guzerá, heritabilities ranged from 0.05 to 0.08, showing a quadratic trend with a peak between the fourth and sixth calving. For the Nelore and Tabapuã breeds, the estimates ranged from 0.03 to 0.07 and from 0.03 to 0.08, respectively, and increased with increasing calving number. The additive genetic correlations exhibited a similar trend among breeds and were higher for stayability between closer calvings. Even between more distant calvings (second v. eighth), stayability showed a moderate to high genetic correlation, which was 0.77, 0.57 and 0.79 for the Guzerá, Nelore and Tabapuã breeds, respectively. For Guzerá, when the models with 4 or 5 regression coefficients were compared, the rank correlations between predicted breeding values for the intercept were always higher than 0.99, indicating the possibility of practical application of the least parameterized model. In conclusion, the model with 4 random regression coefficients is recommended for the genetic evaluation of stayability to consecutive calvings in Zebu cattle.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
Development and Validation of the Five-by-Five Resilience Scale.
DeSimone, Justin A; Harms, P D; Vanhove, Adam J; Herian, Mitchel N
2017-09-01
This article introduces a new measure of resilience and five related protective factors. The Five-by-Five Resilience Scale (5×5RS) is developed on the basis of theoretical and empirical considerations. Two samples ( N = 475 and N = 613) are used to assess the factor structure, reliability, convergent validity, and criterion-related validity of the 5×5RS. Confirmatory factor analysis supports a bifactor model. The 5×5RS demonstrates adequate internal consistency as evidenced by Cronbach's alpha and empirical reliability estimates. The 5×5RS correlates positively with the Connor-Davidson Resilience Scale (CD-RISC), a commonly used measure of resilience. The 5×5RS exhibits similar criterion-related validity to the CD-RISC as evidenced by positive correlations with satisfaction with life, meaning in life, and secure attachment style as well as negative correlations with rumination and anxious or avoidant attachment styles. 5×5RS scores are positively correlated with healthy behaviors such as exercise and negatively correlated with sleep difficulty and symptomology of anxiety and depression. The 5×5RS incrementally explains variance in some criteria above and beyond the CD-RISC. Item responses are modeled using the graded response model. Information estimates demonstrate the ability of the 5×5RS to assess individuals within at least one standard deviation of the mean on relevant latent traits.
Weight information labels on media models reduce body dissatisfaction in adolescent girls.
Veldhuis, Jolanda; Konijn, Elly A; Seidell, Jacob C
2012-06-01
To examine how weight information labels on variously sized media models affect (pre)adolescent girls' body perceptions and how they compare themselves with media models. We used a three (body shape: extremely thin vs. thin vs. normal weight) × three (information label: 6-kg underweight vs. 3-kg underweight vs. normal weight) experimental design in three age-groups (9-10 years, 12-13 years, and 15-16 years; n = 184). The girls completed questionnaires after exposure to media models. Weight information labels affected girls' body dissatisfaction, social comparison with media figures, and objectified body consciousness. Respondents exposed to an extremely thin body shape labeled to be of "normal weight" were most dissatisfied with their own bodies and showed highest levels of objectified body consciousness and comparison with media figures. An extremely thin body shape combined with a corresponding label (i.e., 6-kg underweight), however, induced less body dissatisfaction and less comparison with the media model. Age differences were also found to affect body perceptions: adolescent girls showed more negative body perceptions than preadolescents. Weight information labels may counteract the generally media-induced thin-body ideal. That is, when the weight labels appropriately informed the respondents about the actual thinness of the media model's body shape, girls were less affected. Weight information labels also instigated a normalization effect when a "normal-weight" label was attached to underweight-sized media models. Presenting underweight as a normal body shape, clearly increased body dissatisfaction in girls. Results also suggest age between preadolescence and adolescence as a critical criterion in responding to media models' body shape. Copyright © 2012 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
The information geometry of chaos
NASA Astrophysics Data System (ADS)
Cafaro, Carlo
2008-10-01
In this Thesis, we propose a new theoretical information-geometric framework (IGAC, Information Geometrodynamical Approach to Chaos) suitable to characterize chaotic dynamical behavior of arbitrary complex systems. First, the problem being investigated is defined; its motivation and relevance are discussed. The basic tools of information physics and the relevant mathematical tools employed in this work are introduced. The basic aspects of Entropic Dynamics (ED) are reviewed. ED is an information-constrained dynamics developed by Ariel Caticha to investigate the possibility that laws of physics---either classical or quantum---may emerge as macroscopic manifestations of underlying microscopic statistical structures. ED is of primary importance in our IGAC. The notion of chaos in classical and quantum physics is introduced. Special focus is devoted to the conventional Riemannian geometrodynamical approach to chaos (Jacobi geometrodynamics) and to the Zurek-Paz quantum chaos criterion of linear entropy growth. After presenting this background material, we show that the ED formalism is not purely an abstract mathematical framework, but is indeed a general theoretical scheme from which conventional Newtonian dynamics is obtained as a special limiting case. The major elements of our IGAC and the novel notion of information geometrodynamical entropy (IGE) are introduced by studying two "toy models". To illustrate the potential power of our IGAC, one application is presented. An information-geometric analogue of the Zurek-Paz quantum chaos criterion of linear entropy growth is suggested. Finally, concluding remarks emphasizing strengths and weak points of our approach are presented and possible further research directions are addressed. At this stage of its development, IGAC remains an ambitious unifying information-geometric theoretical construct for the study of chaotic dynamics with several unsolved problems. However, based on our recent findings, we believe it already provides an interesting, innovative and potentially powerful way to study and understand the very important and challenging problems of classical and quantum chaos.
Trial-to-trial carry-over of item- and relational-information in auditory short-term memory
Visscher, Kristina M.; Kahana, Michael J.; Sekuler, Robert
2009-01-01
Using a short-term recognition memory task we evaluated the carry-over across trials of two types of auditory information: the characteristics of individual study sounds (item information), and the relationships between the study sounds (relational information). On each trial, subjects heard two successive broadband study sounds and then decided whether a subsequently presented probe sound had been in the study set. On some trials, the probe item's similarity to stimuli presented on the preceding trial was manipulated. This item information interfered with recognition, increasing false alarms from 0.4% to 4.4%. Moreover, the interference was tuned so that only stimuli very similar to each other interfered. On other trials, the relationship among stimuli was manipulated in order to alter the criterion subjects used in making recognition judgments. The effect of this manipulation was confined to the very trial on which the criterion change was generated, and did not affect the subsequent trial. These results demonstrate the existence of a sharply-tuned carry-over of auditory item information, but no carry-over of relational information. PMID:19210080
Assessment of corneal properties based on statistical modeling of OCT speckle
Jesus, Danilo A.; Iskander, D. Robert
2016-01-01
A new approach to assessing the properties of the corneal micro-structure in vivo, based on the statistical modeling of speckle obtained from Optical Coherence Tomography (OCT), is presented. A number of statistical models were proposed to fit the corneal speckle data obtained from raw OCT images. Short-term changes in corneal properties were studied by inducing corneal swelling, whereas age-related changes were observed by analyzing data from sixty-five subjects aged between twenty-four and seventy-three years. The generalized Gamma distribution was shown to be the best model, in terms of Akaike's Information Criterion, for fitting OCT corneal speckle. Its parameters showed statistically significant differences (Kruskal-Wallis, p < 0.001) for short-term and age-related corneal changes. In addition, it was observed that age-related changes influence the corneal biomechanical behaviour when corneal swelling is induced. This study shows that the generalized Gamma distribution can be used to model corneal speckle in OCT in vivo, providing complementary quantitative information where the micro-structure of corneal tissue is of essence. PMID:28101409
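A hedged sketch of the kind of distribution comparison described above: candidate models are fitted by maximum likelihood to speckle amplitudes and ranked by AIC. The `speckle` array is simulated rather than taken from an OCT image, and the candidate set is illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
speckle = stats.gengamma.rvs(a=2.0, c=1.2, scale=0.8, size=2000, random_state=rng)

candidates = {"generalized gamma": stats.gengamma, "gamma": stats.gamma, "Rayleigh": stats.rayleigh}
for name, dist in candidates.items():
    params = dist.fit(speckle, floc=0)            # location fixed at zero
    loglik = np.sum(dist.logpdf(speckle, *params))
    k = len(params) - 1                            # free parameters (loc is fixed)
    print(f"{name}: AIC = {2 * k - 2 * loglik:.1f}")
```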
[Evaluation and improvement of the management of informed consent in the emergency department].
del Pozo, P; García, J A; Escribano, M; Soria, V; Campillo-Soto, A; Aguayo-Albasini, J L
2009-01-01
To assess the preoperative management in our emergency surgical service and to improve the quality of the care provided to patients. In order to find the causes of non-compliance, the Ishikawa Fishbone diagram was used and eight assessment criteria were chosen. The first assessment includes 120 patients operated on from January to April 2007. Corrective measures were implemented, which consisted of meetings and conferences with doctors and nurses, insisting on the importance of the informed consent as a legal document which must be signed by patients, and the obligation of giving a copy to patients or relatives. The second assessment includes the period from July to October 2007 (n=120). We observed a high non-compliance of C1 signing of surgical consent (CRITERION 1: all patients or relatives have to sign the surgical informed consent for the operation to be performed [27.5%]) and C2 giving a copy of the surgical consent (CRITERION 2: all patients or relatives must have received a copy of the surgical informed consent for the Surgery to be performed [72.5%]) and C4 anaesthetic consent copy (CRITERION 4: all patients or relatives must have received a copy of the Anaesthesia informed consent corresponding to the operation performed [90%]). After implementing corrective measures a significant improvement was observed in the compliance of C2 and C4. In C1 there was an improvement without statistical significance. The carrying out of an improvement cycle enabled the main objective of this paper to be achieved: to improve the management of informed consent and the quality of the care and information provided to our patients.
Economic weights for genetic improvement of lactation persistency and milk yield.
Togashi, K; Lin, C Y
2009-06-01
This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I(1)), the first-two-lactation total yield (I(2)), and first-three-lactation total yield (I(3)). Results show that genetic response in net merit due to selection on restricted selection index could be greater than, equal to, or less than that due to the unrestricted index depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I(1), I(2), or I(3)). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear relationship (concave curve) exists between the criterion and the restricted level. The criterion increases as the restriction level deviates in either direction from 1.5. Without prior information of the economic weight of lactation persistency, the imposition of the restriction level of 1.5 on lactation persistency would maximize change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.
NASA Astrophysics Data System (ADS)
Wu, Jiangning; Wang, Xiaohuan
The rapidly increasing number of mobile phone users and types of services has led to a great accumulation of complaint information. How to use this information to enhance the quality of customer service is a major issue at present. To handle this kind of problem, the paper presents an approach to constructing a domain knowledge map for navigating explicit and tacit knowledge in two ways: building a Topic Map-based explicit knowledge navigation model, which includes domain TM construction, a semantic topic expansion algorithm and VSM-based similarity calculation; and building a Social Network Analysis-based tacit knowledge navigation model, which includes a multi-relational expert navigation algorithm and criteria to evaluate the performance of expert networks. In doing so, both the customer managers and the operators in call centers can find the appropriate knowledge and experts quickly and accurately. The experimental results show that the above method is very powerful for knowledge navigation.
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
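A minimal sketch of the BIC comparison described above, assuming each associative model has already been fit (e.g., by hill climbing) to the same retrospective-revaluation data set; the log-likelihoods, parameter counts, and number of observations are hypothetical.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; lower is better, with a penalty on extra free parameters."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

fits = {"model_X": (-151.2, 4), "model_Y": (-146.5, 7)}   # (max log-likelihood, free parameters)
n_obs = 240
for name, (ll, k) in fits.items():
    print(f"{name}: BIC = {bic(ll, k, n_obs):.1f}")
```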
Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?
NASA Technical Reports Server (NTRS)
Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander
2016-01-01
Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
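A hedged numerical sketch of the two criteria: MSEP_fixed compares a single model configuration with observations, while MSEP_uncertain(X) is approximated here as a squared-bias term plus a model-variance term estimated from an ensemble in which structure, inputs, and parameters are varied. All arrays are hypothetical and the full random-effects ANOVA decomposition is omitted.

```python
import numpy as np

obs = np.array([5.1, 6.0, 4.8, 5.5])                      # observed yields
pred_fixed = np.array([5.4, 5.7, 5.0, 5.9])               # one fixed model configuration
rng = np.random.default_rng(2)
ensemble = rng.normal(5.5, 0.4, size=(50, obs.size))       # predictions from varied configurations

msep_fixed = np.mean((obs - pred_fixed) ** 2)
squared_bias = np.mean((obs - ensemble.mean(axis=0)) ** 2)  # estimable from hindcasts
model_variance = ensemble.var(axis=0, ddof=1).mean()        # estimable from a simulation experiment
print(f"MSEP_fixed = {msep_fixed:.3f}, MSEP_uncertain(X) ~ {squared_bias + model_variance:.3f}")
```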
Statistically Based Approach to Broadband Liner Design and Assessment
NASA Technical Reports Server (NTRS)
Jones, Michael G. (Inventor); Nark, Douglas M. (Inventor)
2016-01-01
A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.
Biomechanical investigation of naso-orbitoethmoid trauma by finite element analysis.
Huempfner-Hierl, Heike; Schaller, Andreas; Hemprich, Alexander; Hierl, Thomas
2014-11-01
Naso-orbitoethmoid fractures account for 5% of all facial fractures. We used data derived from a white 34-year-old man to make a transient dynamic finite element model, which consisted of about 740 000 elements, to simulate fist-like impacts to this anatomically complex area. Finite element analysis showed a pattern of von Mises stresses beyond the yield criterion of bone that corresponded with fractures commonly seen clinically. Finite element models can be used to simulate injuries to the human skull, and provide information about the pathogenesis of different types of fracture. Copyright © 2014 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
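For reference, a short sketch of the von Mises equivalent stress that serves as the yield criterion in such finite element analyses; the stress components below are illustrative values at a single integration point, not results from the model described above.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from the six Cauchy stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Hypothetical stress state in MPa; the result would be compared against the yield stress of bone.
print(f"sigma_vm = {von_mises(120.0, 35.0, 10.0, 20.0, 5.0, 8.0):.1f} MPa")
```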
Establishing a Spinal Injury Criterion for Military Seats
1997-01-01
A General Linear Model (GLM) analysis of 54 trials (18 in Phase I plus 36 in Phase II) examined the combined effects of delta V, Gpk, and ATD size; the 5th-percentile male ATD would not have complied with the tolerance criterion under the higher impulse severity levels (i.e., 20 and 30 Gpk).
On the upper bound in the Bohm sheath criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotelnikov, I. A., E-mail: I.A.Kotelnikov@inp.nsk.su; Skovorodin, D. I., E-mail: D.I.Skovorodin@inp.nsk.su
2016-02-15
The existence of an upper bound in the Bohm sheath criterion is discussed; according to this criterion, the Debye sheath at the interface between a plasma and a negatively charged electrode is stable only if the ion flow velocity in the plasma exceeds the ion sound velocity. It is stated that, with the exception of some artificial ionization models, the Bohm sheath criterion is satisfied as an equality at the lower bound and the ion flow velocity is equal to the speed of sound. In the one-dimensional theory, a supersonic flow appears only in an unrealistic model of a localized ion source whose size is less than the Debye length; however, supersonic flows seem to be possible in the two- and three-dimensional cases. The available numerical codes used to simulate charged-particle sources with a plasma emitter do not assume the presence of the upper bound in the Bohm sheath criterion; however, correspondence with experimental data is usually achieved if the ion flow velocity in the plasma is close to the ion sound velocity.
Mebane, C.A.
2010-01-01
Criteria to protect aquatic life are intended to protect diverse ecosystems, but in practice are usually developed from compilations of single-species toxicity tests using standard test organisms that were tested in laboratory environments. Species sensitivity distributions (SSDs) developed from these compilations are extrapolated to set aquatic ecosystem criteria. The protectiveness of the approach was critically reviewed with a chronic SSD for cadmium comprising 27 species within 21 genera. Within the data set, one genus had lower cadmium effects concentrations than the SSD fifth percentile-based criterion, so in theory this genus, the amphipod Hyalella, could be lost or at least allowed some level of harm by this criteria approach. However, population matrix modeling projected only slightly increased extinction risks for a temperate Hyalella population under scenarios similar to the SSD fifth percentile criterion. The criterion value was further compared to cadmium effects concentrations in ecosystem experiments and field studies. Generally, few adverse effects were inferred from ecosystem experiments at concentrations less than the SSD fifth percentile criterion. Exceptions were behavioral impairments in simplified food web studies. No adverse effects were apparent in field studies under conditions that seldom exceeded the criterion. At concentrations greater than the SSD fifth percentile, the magnitudes of adverse effects in the field studies were roughly proportional to the laboratory-based fraction of species with adverse effects in the SSD. Overall, the modeling and field validation comparisons of the chronic criterion values generally supported the relevance and protectiveness of the SSD fifth percentile approach with cadmium. © 2009 Society for Risk Analysis.
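A hedged sketch of how an SSD fifth-percentile criterion (often called an HC5) can be derived by fitting a log-normal distribution to chronic effect concentrations; the species-level values below are made up for illustration and are not the cadmium data set reviewed above.

```python
import numpy as np
from scipy import stats

chronic_ec = np.array([0.3, 0.5, 0.8, 1.1, 1.6, 2.4, 3.5, 5.0, 7.8, 12.0])  # ug/L, hypothetical
log_ec = np.log(chronic_ec)
mu, sigma = log_ec.mean(), log_ec.std(ddof=1)          # log-normal SSD parameters
hc5 = np.exp(stats.norm.ppf(0.05, loc=mu, scale=sigma))
print(f"SSD fifth percentile (HC5): {hc5:.2f} ug/L")
```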
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X, W)] of a certain random vector Z, and then defining I = E[Z(X, W) Z(X, W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher computational cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
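A minimal sketch, assuming Z is an array of per-sample gradient vectors of the NLDA criterion produced elsewhere, of the natural-like update implied by the definitions above: the ordinary gradient estimate E[Z] is premultiplied by the inverse of I = E[Z Z^T]. The small ridge term is an implementation choice for numerical stability, not part of the original formulation.

```python
import numpy as np

def natural_step(Z, learning_rate=0.1, ridge=1e-6):
    """One natural-like gradient step from per-sample gradients Z of shape (n_samples, n_weights)."""
    g = Z.mean(axis=0)                      # estimate of the gradient, E[Z(X, W)]
    I = (Z.T @ Z) / Z.shape[0]              # estimate of the information matrix, E[Z Z^T]
    I += ridge * np.eye(I.shape[0])         # regularize so the linear solve is well posed
    return -learning_rate * np.linalg.solve(I, g)

Z = np.random.default_rng(3).normal(size=(256, 10))   # placeholder per-sample gradients
print(natural_step(Z)[:3])
```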
Noorbaloochi, Sharareh; Sharon, Dahlia; McClelland, James L
2015-08-05
We used electroencephalography (EEG) and behavior to examine the role of payoff bias in a difficult two-alternative perceptual decision under deadline pressure in humans. The findings suggest that a fast guess process, biased by payoff and triggered by stimulus onset, occurred on a subset of trials and raced with an evidence accumulation process informed by stimulus information. On each trial, the participant judged whether a rectangle was shifted to the right or left and responded by squeezing a right- or left-hand dynamometer. The payoff for each alternative (which could be biased or unbiased) was signaled 1.5 s before stimulus onset. The choice response was assigned to the first hand reaching a squeeze force criterion and reaction time was defined as time to criterion. Consistent with a fast guess account, fast responses were strongly biased toward the higher-paying alternative and the EEG exhibited an abrupt rise in the lateralized readiness potential (LRP) on a subset of biased payoff trials contralateral to the higher-paying alternative ∼ 150 ms after stimulus onset and 50 ms before stimulus information influenced the LRP. This rise was associated with poststimulus dynamometer activity favoring the higher-paying alternative and predicted choice and response time. Quantitative modeling supported the fast guess account over accounts of payoff effects supported in other studies. Our findings, taken with previous studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy for the integration of payoff and stimulus information. Humans and other animals often face situations in which they must make choices based on uncertain sensory information together with information about expected outcomes (gains or losses) about each choice. We investigated how differences in payoffs between available alternatives affect neural activity, overt choice, and the timing of choice responses. In our experiment, in which participants were under strong time pressure, neural and behavioral findings together with model fitting suggested that our human participants often made a fast guess toward the higher reward rather than integrating stimulus and payoff information. Our findings, taken with findings from other studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy. Copyright © 2015 the authors 0270-6474/15/3510989-23$15.00/0.
Passive Electroreception in Fish: An Analog Model of the Spike Generation Zone.
NASA Astrophysics Data System (ADS)
Harvey, James Robert
Sensory transduction begins in receptor cells specialized to the sensory modality involved and proceeds to the more generalized stage of the first afferent fiber, converting the initial sensory information into neural spikes for transmittal to the central nervous system. We have developed a unique analog electronic model of the generalized step (also known as the spike generation zone (SGZ)) using a tunnel diode, an operational amplifier, resistors, and capacitors. With no externally applied simulated postsynaptic input current, our model represents a 10^-3 cm^2 patch (100 times the typical in vivo area) of tonically active, nonadaptive, postsynaptic neural membrane that behaves as a pacemaker cell. Similar to the FitzHugh-Nagumo equations, our model is shown to be a simplification of the Hodgkin-Huxley parallel conductance model and can be analyzed by the methods of van der Pol. Measurements using the model yield results which compare favorably to physiological stimulus-response data gathered by Murray for elasmobranch electroreceptors. We then use the model to show that the main contribution to variance in the rate of neural spike output is provided by coincident inputs to the SGZ oscillator (i.e., by synaptic input noise) and not by inherent instability of the SGZ oscillator. Configured for maximum sensitivity, our model is capable of detecting stimulus changes as low as 50 fA in less than a second, which corresponds to a fractional frequency change of Δf/f ≈ 2 × 10^-3. Much data exists implying that in vivo detection of Δf/f is limited to the range of one to ten percent (Weber-Fechner criterion). We propose that the variance induced by the synaptic input noise provides a plausible physiological basis for the Weber-Fechner criterion.
Development of failure criterion for Kevlar-epoxy fabric laminates
NASA Technical Reports Server (NTRS)
Tennyson, R. C.; Elliott, W. G.
1984-01-01
The development of the tensor polynomial failure criterion for composite laminate analysis is discussed. In particular, emphasis is given to the fabrication and testing of Kevlar-49 fabric (Style 285)/Narmco 5208 Epoxy. The quadratic failure criterion with F(12)=0 provides accurate estimates of failure stresses for the Kevlar/Epoxy investigated. The cubic failure criterion was re-cast into an operationally easier form, providing the engineer with design curves that can be applied to laminates fabricated from unidirectional prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all unidirectional prepregs, and its use must be restricted to the generic materials investigated to date.
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
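A hedged, purely illustrative sketch of the design metric involved: the D-criterion compares ln det(FIM) for a full FIM and for a block-diagonal counterpart in which the cross terms between fixed-effect and variance parameters are dropped. The 4x4 matrix is invented, and no FO/FOCE linearization is actually performed here.

```python
import numpy as np

fim_full = np.array([[50.0,  5.0,  1.0, 0.5],
                     [ 5.0, 40.0,  0.8, 0.2],
                     [ 1.0,  0.8, 30.0, 2.0],
                     [ 0.5,  0.2,  2.0, 20.0]])   # hypothetical population FIM

fim_block = fim_full.copy()
fim_block[:2, 2:] = 0.0                            # drop fixed-effect/variance cross terms
fim_block[2:, :2] = 0.0

for label, fim in (("full FIM", fim_full), ("block-diagonal FIM", fim_block)):
    sign, logdet = np.linalg.slogdet(fim)
    print(f"{label}: ln det = {logdet:.3f}")
```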
Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A
2010-08-20
The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.
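A hedged sketch of the comparison step: two hypothetical comorbidity summaries are each entered into a logistic regression for mortality, and the c-statistic, AIC and BIC are computed for each. The data are simulated placeholders, not Medicare CCW claims, and the bootstrap confidence intervals are omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 2000
risk_a = rng.normal(size=n)                    # e.g., a CMS-HCC-style risk score
risk_b = 0.6 * risk_a + rng.normal(size=n)     # e.g., a Charlson-type index
death = rng.binomial(1, 1.0 / (1.0 + np.exp(-(1.2 * risk_a - 2.0))))

for label, score in (("method A", risk_a), ("method B", risk_b)):
    X = sm.add_constant(score)
    fit = sm.Logit(death, X).fit(disp=False)
    c_stat = roc_auc_score(death, fit.predict(X))
    print(f"{label}: c={c_stat:.3f}, AIC={fit.aic:.1f}, BIC={fit.bic:.1f}")
```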
Comparative testing of dark matter models with 15 HSB and 15 LSB galaxies
NASA Astrophysics Data System (ADS)
Kun, E.; Keresztes, Z.; Simkó, A.; Szűcs, G.; Gergely, L. Á.
2017-12-01
Context. We assemble a database of 15 high surface brightness (HSB) and 15 low surface brightness (LSB) galaxies, for which surface brightness density and spectroscopic rotation curve data are both available and representative of various morphologies. We use this dataset to test the Navarro-Frenk-White, the Einasto, and the pseudo-isothermal sphere dark matter models. Aims: We investigate the compatibility of the pure baryonic model and baryonic plus one of the three dark matter models with observations on the assembled galaxy database. When a dark matter component improves the fit with the spectroscopic rotational curve, we rank the models according to the goodness of fit to the datasets. Methods: We constructed the spatial luminosity density of the baryonic component based on the surface brightness profile of the galaxies. We estimated the mass-to-light (M/L) ratio of the stellar component through a previously proposed color-mass-to-light ratio relation (CMLR), which yields stellar masses independent of the photometric band. We assumed an axisymmetric baryonic mass model with variable axis ratios together with one of the three dark matter models to provide the theoretical rotational velocity curves, and we compared them with the dataset. In a second attempt, we addressed the question of whether the dark component could be replaced by a pure baryonic model with fitted M/L ratios, varied over ranges consistent with CMLR relations derived from the available stellar population models. We employed the Akaike information criterion to establish the performance of the best-fit models. Results: For 7 galaxies (2 HSB and 5 LSB), neither model fits the dataset within the 1σ confidence level. For the other 23 cases, one of the models with dark matter explains the rotation curve data best. According to the Akaike information criterion, the pseudo-isothermal sphere emerges as most favored in 14 cases, followed by the Navarro-Frenk-White (6 cases) and the Einasto (3 cases) dark matter models. We find that the pure baryonic model with fitted M/L ratios falls within the 1σ confidence level for 10 HSB and 2 LSB galaxies, at the price of increasing the M/L ratios on average by a factor of two, but the fits are inferior compared to the best-fitting dark matter model.
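A hedged sketch of the NFW circular-velocity profile that such rotation-curve fits typically use, v_c^2(r) = (4 pi G rho_s r_s^3 / r) [ln(1 + r/r_s) - (r/r_s)/(1 + r/r_s)]; rho_s and r_s are the free halo parameters, and the numerical values below are illustrative rather than fitted to any of the 30 galaxies.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def v_circ_nfw(r_kpc, rho_s, r_s):
    """NFW circular velocity in km/s at galactocentric radius r (kpc)."""
    x = r_kpc / r_s
    m_enclosed = 4.0 * np.pi * rho_s * r_s ** 3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m_enclosed / r_kpc)

radii = np.array([2.0, 5.0, 10.0, 20.0])            # kpc
print(v_circ_nfw(radii, rho_s=1.0e7, r_s=10.0))      # rho_s in Msun/kpc^3
```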
A K-BKZ Formulation for Soft-Tissue Viscoelasticity
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Diethelm, Kai
2005-01-01
A viscoelastic model of the K-BKZ (Kaye 1962; Bernstein et al. 1963) type is developed for isotropic biological tissues, and applied to the fat pad of the human heel. To facilitate this pursuit, a class of elastic solids is introduced through a novel strain-energy function whose elements possess strong ellipticity, and therefore lead to stable material models. The standard fractional-order viscoelastic (FOV) solid is used to arrive at the overall elastic/viscoelastic structure of the model, while the elastic potential via the K-BKZ hypothesis is used to arrive at the tensorial structure of the model. Candidate sets of functions are proposed for the elastic and viscoelastic material functions present in the model, including a regularized fractional derivative that was determined to be the best. The Akaike information criterion (AIC) is advocated for performing multi-model inference, enabling an objective selection of the best material function from within a candidate set.
Freed, A D; Diethelm, K
2006-11-01
A viscoelastic model of the K-BKZ (Kaye, Technical Report 134, College of Aeronautics, Cranfield 1962; Bernstein et al., Trans Soc Rheol 7: 391-410, 1963) type is developed for isotropic biological tissues and applied to the fat pad of the human heel. To facilitate this pursuit, a class of elastic solids is introduced through a novel strain-energy function whose elements possess strong ellipticity, and therefore lead to stable material models. This elastic potential - via the K-BKZ hypothesis - also produces the tensorial structure of the viscoelastic model. Candidate sets of functions are proposed for the elastic and viscoelastic material functions present in the model, including two functions whose origins lie in the fractional calculus. The Akaike information criterion is used to perform multi-model inference, enabling an objective selection to be made as to the best material function from within a candidate set.
Two-component gravitational instability in spiral galaxies
NASA Astrophysics Data System (ADS)
Marchuk, A. A.; Sotnikova, N. Y.
2018-04-01
We applied a criterion of gravitational instability, valid for two-component and infinitesimally thin discs, to observational data along the major axis for seven spiral galaxies of early types. Unlike in most papers, the dispersion equation corresponding to the criterion was solved directly, without using any approximation. The velocity dispersion of stars in the radial direction σR was constrained to the range of possible values instead of being set to a fixed value. For all galaxies, the outer regions of the disc were analysed up to R ≤ 130 arcsec. The maximal and sub-maximal disc models were used to translate surface brightness into surface density. The largest destabilizing disturbance that stars can exert on a gaseous disc was estimated. It was shown that the two-component criterion differs little from the one-fluid criterion for galaxies with a large surface gas density, but it makes it possible to explain large-scale star formation in those regions where the gaseous disc is stable. In the galaxy NGC 1167, star formation is entirely driven by the self-gravity of the stars. A comparison is made with the conventional approximations, which also include the thickness effect, and with models for different sound speeds cg. It is shown that values of the effective Toomre parameter correspond to the instability criterion of a two-component disc, Qeff < 1.5-2.5. This result is consistent with previous theoretical and observational studies.
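For orientation, a hedged sketch of the single-component Toomre parameters to which the two-component criterion is often compared: Q_star = sigma_R * kappa / (3.36 G Sigma_star) and Q_gas = c_g * kappa / (pi G Sigma_gas). The numerical inputs are illustrative, not values from the seven galaxies, and the full two-component dispersion relation solved in the paper is not reproduced here.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / Msun

def q_star(sigma_R, kappa, surf_star):
    """Toomre Q for a stellar disc (sigma_R in km/s, kappa in km/s/kpc, Sigma in Msun/kpc^2)."""
    return sigma_R * kappa / (3.36 * G * surf_star)

def q_gas(c_g, kappa, surf_gas):
    """Toomre Q for a gaseous disc with sound speed c_g in km/s."""
    return c_g * kappa / (np.pi * G * surf_gas)

kappa = 40.0   # epicyclic frequency, km/s/kpc (illustrative)
print(q_star(30.0, kappa, 5.0e7), q_gas(10.0, kappa, 1.0e7))
```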
A feedback control model for network flow with multiple pure time delays
NASA Technical Reports Server (NTRS)
Press, J.
1972-01-01
A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential difference equations. This causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a problem. A digital computer simulation confirms the validity of such criterion. An optimal decision making process with multiple delays is presented.
How Can We Get the Information about Democracy? The Example of Social Studies Prospective Teachers
ERIC Educational Resources Information Center
Tonga, Deniz
2014-01-01
This research examines the information about democracy held by social studies prospective teachers and how they interpret their information sources. The research was designed as a survey study, and the participants were determined with the criterion sampling method. The data were collected through developed open-ended questions…
A new class of problems in the calculus of variations
NASA Astrophysics Data System (ADS)
Ekeland, Ivar; Long, Yiming; Zhou, Qinglong
2013-11-01
This paper investigates an infinite-horizon problem in the one-dimensional calculus of variations, arising from the Ramsey model of endogeneous economic growth. Following Chichilnisky, we introduce an additional term, which models concern for the well-being of future generations. We show that there are no optimal solutions, but that there are equilibrium strateges, i.e. Nash equilibria of the leader-follower game between successive generations. To solve the problem, we approximate the Chichilnisky criterion by a biexponential criterion, we characterize its equilibria by a pair of coupled differential equations of HJB type, and we go to the limit. We find all the equilibrium strategies for the Chichilnisky criterion. The mathematical analysis is difficult because one has to solve an implicit differential equation in the sense of Thom. Our analysis extends earlier work by Ekeland and Lazrak.
NASA Astrophysics Data System (ADS)
Aljoumani, Basem; Kluge, Björn; sanchez, Josep; Wessolek, Gerd
2017-04-01
Highways and main roads are potential sources of contamination for the surrounding environment. High traffic rates result in elevated heavy metal concentrations in road runoff, soil and water seepage, which has attracted much attention in the recent past. Predicting the transfer of heavy metals near the roadside into deeper soil layers is very important for preventing groundwater pollution. This study was carried out on data from a number of lysimeters installed along the A115 highway (Germany), which carries a mean daily traffic of 90,000 vehicles per day. Three polyethylene (PE) lysimeters were installed at the A115 highway. They have the following dimensions: length 150 cm, width 100 cm, height 60 cm. The lysimeters were filled with different soil materials, which were recently used for embankment construction in Germany. With the obtained data, we will develop a time series analysis model to predict total and dissolved metal concentrations in road runoff and in the soil solution of the roadside embankments. The time series consisted of monthly measurements of heavy metals and was transformed to a stationary series. Subsequently, the transformed data will be used to conduct analyses in the time domain in order to obtain the parameters of a seasonal autoregressive integrated moving average (ARIMA) model. A four-phase approach will be used for identifying and fitting ARIMA models: identification, parameter estimation, diagnostic checking, and forecasting. An automatic selection criterion, such as the Akaike information criterion, will be used to enhance this flexible approach to model building.
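A hedged sketch of the identification step described above: candidate seasonal ARIMA orders are fitted to a (stationarized) monthly series and ranked by AIC using statsmodels. The `series` array is simulated noise standing in for the lysimeter concentration data, and the order grid is purely illustrative.

```python
import itertools
import warnings

import numpy as np
import statsmodels.api as sm

warnings.filterwarnings("ignore")                          # silence convergence chatter on toy data
series = np.random.default_rng(5).normal(0.0, 1.0, 120)    # placeholder monthly measurements

best_aic, best_order = np.inf, None
for p, q in itertools.product(range(3), range(3)):
    fit = sm.tsa.SARIMAX(series, order=(p, 1, q), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    if fit.aic < best_aic:
        best_aic, best_order = fit.aic, (p, 1, q)
print(f"selected non-seasonal order {best_order} with AIC {best_aic:.1f}")
```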
Relating DSM-5 section III personality traits to section II personality disorder diagnoses.
Morey, L C; Benson, K T; Skodol, A E
2016-02-01
The DSM-5 Personality and Personality Disorders Work Group formulated a hybrid dimensional/categorical model that represented personality disorders as combinations of core impairments in personality functioning with specific configurations of problematic personality traits. Specific clusters of traits were selected to serve as indicators for six DSM categorical diagnoses to be retained in this system - antisocial, avoidant, borderline, narcissistic, obsessive-compulsive and schizotypal personality disorders. The goal of the current study was to describe the empirical relationships between the DSM-5 section III pathological traits and DSM-IV/DSM-5 section II personality disorder diagnoses. Data were obtained from a sample of 337 clinicians, each of whom rated one of his or her patients on all aspects of the DSM-IV and DSM-5 proposed alternative model. Regression models were constructed to examine trait-disorder relationships, and the incremental validity of core personality dysfunctions (i.e. criterion A features for each disorder) was examined in combination with the specified trait clusters. Findings suggested that the trait assignments specified by the Work Group tended to be substantially associated with corresponding DSM-IV concepts, and the criterion A features provided additional diagnostic information in all but one instance. Although the DSM-5 section III alternative model provided a substantially different taxonomic structure for personality disorders, the associations between this new approach and the traditional personality disorder concepts in DSM-5 section II make it possible to render traditional personality disorder concepts using alternative model traits in combination with core impairments in personality functioning.
Two-phase strategy of controlling motor coordination determined by task performance optimality.
Shimansky, Yury P; Rand, Miya K
2013-02-01
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model's utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.
NASA Astrophysics Data System (ADS)
Kou, Jiaqing; Le Clainche, Soledad; Zhang, Weiwei
2018-01-01
This study proposes an improvement in the performance of reduced-order models (ROMs) based on dynamic mode decomposition to model the flow dynamics of the attractor from a transient solution. By combining higher order dynamic mode decomposition (HODMD) with an efficient mode selection criterion, the HODMD with criterion (HODMDc) ROM is able to identify dominant flow patterns with high accuracy. This helps us to develop a more parsimonious ROM structure, allowing better predictions of the attractor dynamics. The method is tested in the solution of a NACA0012 airfoil buffeting in a transonic flow, and its good performance in both the reconstruction of the original solution and the prediction of the permanent dynamics is shown. In addition, the robustness of the method has been successfully tested using different types of parameters, indicating that the proposed ROM approach is a tool promising for using in both numerical simulations and experimental data.
Data-Driven Learning of Q-Matrix
Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2013-01-01
The recent surge of interests in cognitive assessment has led to developments of novel statistical models for diagnostic classification. Central to many such models is the well-known Q-matrix, which specifies the item–attribute relationships. This article proposes a data-driven approach to identification of the Q-matrix and estimation of related model parameters. A key ingredient is a flexible T-matrix that relates the Q-matrix to response patterns. The flexibility of the T-matrix allows the construction of a natural criterion function as well as a computationally amenable algorithm. Simulations results are presented to demonstrate usefulness and applicability of the proposed method. Extension to handling of the Q-matrix with partial information is presented. The proposed method also provides a platform on which important statistical issues, such as hypothesis testing and model selection, may be formally addressed. PMID:23926363
Discrete time modeling and stability analysis of TCP Vegas
NASA Astrophysics Data System (ADS)
You, Byungyong; Koo, Kyungmo; Lee, Jin S.
2007-12-01
This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Previous papers have shown global stability of several network models, but those models are not dual problems in which dynamics exist in both sources and links, as they do in TCP Vegas. Other papers studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition. Using a discrete-time state-space model and Jury's criterion, we obtain an asymptotic stability region for the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
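A minimal numerical sketch of the stability test involved: for a discrete-time linear model, Jury's criterion is equivalent to requiring every root of the characteristic polynomial to lie strictly inside the unit circle, which can be checked directly. The polynomial coefficients below are hypothetical, not the linearized TCP Vegas dynamics.

```python
import numpy as np

def is_schur_stable(char_poly):
    """True if all roots of the characteristic polynomial (highest degree first) lie inside the unit circle."""
    return bool(np.all(np.abs(np.roots(char_poly)) < 1.0))

print(is_schur_stable([1.0, -1.2, 0.45]))   # roots have magnitude ~0.67 -> stable
print(is_schur_stable([1.0, -2.1, 1.2]))    # roots have magnitude ~1.10 -> unstable
```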
Examination of DSM-5 Section III avoidant personality disorder in a community sample.
Sellbom, Martin; Carmichael, Kieran L C; Liggett, Jacqueline
2017-11-01
The current research evaluated the continuity between DSM-5 Section II and Section III diagnostic operationalizations of avoidant personality disorder (AvPD). More specifically, the study had three aims: (1) to examine which personality constructs comprise the optimal trait constellation for AvPD; (2) to investigate the utility of the proposed structure of the Section III AvPD diagnosis, in regard to combining functional impairment (criterion A) and a dimensional measure of personality (criterion B) variables; and (3) to determine whether AvPD-specific impairment confers incremental meaningful contribution above and beyond general impairment in personality functioning. A mixed sample of 402 university and community participants was recruited, and they were administered multiple measures of Section II PD, personality traits, and personality impairment. A latent measurement model approach was used to analyse data. Results supported the general continuity between Section II and Section III of the DSM-5; however, three of the four main criterion B traits were the stronger predictors. There was also some support for the trait unassertiveness augmenting the criterion B trait profile. The combination of using functional impairment criteria (criterion A) and dimensional personality constructs (criterion B) in operationalizing AvPD was supported; however, the reliance on disorder-specific over general impairment for criterion A was not supported. Copyright © 2017 John Wiley & Sons, Ltd.
Why is the dog an ideal model for aging research?
Gilmore, Keiva M; Greer, Kimberly A
2015-11-01
With many caveats to the traditional vertebrate species pertaining to biogerontology investigations, it has been suggested that a most informative model is the one which: 1) examines closely related species, or various members of the same species with naturally occurring lifespan variation, 2) already has adequate medical procedures developed, 3) has a well annotated genome, 4) does not require artificial housing, and can live in its natural environment while being investigated, and 5) allows considerable information to be gathered within a relatively short period of time. The domestic dog unsurprisingly fits each criterion mentioned. The dog has already become a key model system in which to evaluate surgical techniques and novel medications because of the remarkable similarity between human and canine conditions, treatments, and response to therapy. The dog naturally serves as a disease model for study, obviating the need to construct artificial genetically modified examples of disease. Just as the dog offers a natural model for human conditions and diseases, simple observation leads to the conclusion that the canine aging phenotype also mimics that of the human. Genotype information, biochemical information pertaining to the GH/IGF-1 pathway, and some limited longitudinal investigations have begun the establishment of the domestic dog as a model of aging. Although we find that dogs indeed are a model to study aging and there are many independent pieces of canine aging data, there are many more "open" areas, ripe for investigation. Published by Elsevier Inc.
Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation
NASA Astrophysics Data System (ADS)
Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.
2017-10-01
A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes are sufficiently big to consider the well-known QEC threshold. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.
Tracy, J I; Pinsk, M; Helverson, J; Urban, G; Dietz, T; Smith, D J
2001-08-01
The link between automatic and effortful processing and nonanalytic and analytic category learning was evaluated in a sample of 29 college undergraduates using declarative memory, semantic category search, and pseudoword categorization tasks. Automatic and effortful processing measures were hypothesized to be associated with nonanalytic and analytic categorization, respectively. Results suggested that contrary to prediction strong criterion-attribute (analytic) responding on the pseudoword categorization task was associated with strong automatic, implicit memory encoding of frequency-of-occurrence information. Data are discussed in terms of the possibility that criterion-attribute category knowledge, once established, may be expressed with few attentional resources. The data indicate that attention resource requirements, even for the same stimuli and task, vary depending on the category rule system utilized. Also, the automaticity emerging from familiarity with analytic category exemplars is very different from the automaticity arising from extensive practice on a semantic category search task. The data do not support any simple mapping of analytic and nonanalytic forms of category learning onto the automatic and effortful processing dichotomy and challenge simple models of brain asymmetries for such procedures. Copyright 2001 Academic Press.
Combining SVM and flame radiation to forecast BOF end-point
NASA Astrophysics Data System (ADS)
Wen, Hongyuan; Zhao, Qi; Xu, Lingfei; Zhou, Munchun; Chen, Yanru
2009-05-01
Because of the complex reactions in the Basic Oxygen Furnace (BOF) for steelmaking, the main end-point control methods face insurmountable difficulties. To address these problems, a support vector machine (SVM) method for forecasting the BOF steelmaking end-point is presented based on flame radiation information. The rationale is that the furnace flame reflects the carbon-oxygen reaction, which is the major reaction in the steelmaking furnace. The system can acquire spectrum and image data quickly in the adverse steelmaking environment. The structure of an SVM and that of a multilayer feed-forward neural network are similar, but the SVM model can overcome the inherent defects of the latter. The model is trained and used for forecasting with SVM and appropriate variables of light and image characteristic information. The model training process follows the structural risk minimization (SRM) criterion, and the design parameters can be adjusted automatically according to the sampled data during training. Experimental results indicate that the prediction precision of the SVM model and the execution time both meet the requirements of online end-point judgment.
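A hedged sketch of the prediction step: spectral and image features extracted from the furnace flame are used to train an SVM end-point classifier. The feature matrix and labels below are simulated placeholders, and an RBF kernel stands in for whatever kernel the original system used.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 8))                                             # flame radiation features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 300) > 0).astype(int)   # end-point reached?

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:200], y[:200])
print("hold-out accuracy:", clf.score(X[200:], y[200:]))
```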
Measures and Interpretations of Vigilance Performance: Evidence Against the Detection Criterion
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1998-01-01
Operators' performance in a vigilance task is often assumed to depend on their choice of a detection criterion. When the signal rate is low this criterion is set high, causing the hit and false alarm rates to be low. With increasing time on task the criterion presumably tends to increase even further, thereby further decreasing the hit and false alarm rates. Virtually all of the empirical evidence for this simple interpretation is based on estimates of the bias measure Beta from signal detection theory. In this article, I describe a new approach to studying decision making that does not require the technical assumptions of signal detection theory. The results of this new analysis suggest that the detection criterion is never biased toward either response, even when the signal rate is low and the time on task is long. Two modifications of the signal detection theory framework are considered to account for this seemingly paradoxical result. The first assumes that the signal rate affects the relative sizes of the variances of the information distributions; the second assumes that the signal rate affects the logic of the operator's stopping rule. Actual or potential applications of this research include the improved training and performance assessment of operators in areas such as product quality control, air traffic control, and medical and clinical diagnosis.
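For reference, the equal-variance signal detection quantities discussed here, sensitivity d', criterion location c, and the bias measure beta, can be computed from hit and false-alarm rates as in the short sketch below (standard textbook formulas; the rates shown are illustrative, not data from the article).

```python
# Standard equal-variance SDT measures from hit and false-alarm rates.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                    # sensitivity
    c = -0.5 * (z_hit + z_fa)                 # criterion location
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)   # likelihood-ratio bias
    return d_prime, c, beta

# A low signal rate typically yields low hit and false-alarm rates:
print(sdt_measures(hit_rate=0.30, fa_rate=0.02))
```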
Nozari, Nazbanou; Hepner, Christopher R
2018-06-05
Competitive accounts of lexical selection propose that the activation of competitors slows down the selection of the target. Non-competitive accounts, on the other hand, posit that target response latencies are independent of the activation of competing items. In this paper, we propose a signal detection framework for lexical selection and show how a flexible selection criterion affects claims of competitive selection. Specifically, we review evidence from neurotypical and brain-damaged speakers and demonstrate that task goals and the state of the production system determine whether a competitive or a non-competitive selection profile arises. We end by arguing that there is conclusive evidence for a flexible criterion in lexical selection, and that integrating criterion shifts into models of language production is critical for evaluating theoretical claims regarding (non-)competitive selection.
Event-based cluster synchronization of coupled genetic regulatory networks
NASA Astrophysics Data System (ADS)
Yue, Dandan; Guan, Zhi-Hong; Li, Tao; Liao, Rui-Quan; Liu, Feng; Lai, Qiang
2017-09-01
In this paper, the cluster synchronization of coupled genetic regulatory networks with a directed topology is studied by using the event-based strategy and pinning control. An event-triggered condition with a threshold consisting of the neighbors' discrete states at their own event time instants and a state-independent exponential decay function is proposed. The intra-cluster states information and extra-cluster states information are involved in the threshold in different ways. By using the Lyapunov function approach and the theories of matrices and inequalities, we establish the cluster synchronization criterion. It is shown that both the avoidance of continuous transmission of information and the exclusion of the Zeno behavior are ensured under the presented triggering condition. Explicit conditions on the parameters in the threshold are obtained for synchronization. The stability criterion of a single GRN is also given under the reduced triggering condition. Numerical examples are provided to validate the theoretical results.
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success for optimizing multimodal image registration. The MMI criterion is used to estimate the parameters in the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter having parameters optimized from a MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
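The MMI criterion rests on the mutual information between two co-registered images; a minimal histogram-based sketch of that quantity is given below (illustrative only, not the authors' restoration code).

```python
# Histogram-based mutual information between two co-registered images.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Example: MI between an image and a slightly blurred copy of itself
rng = np.random.default_rng(1)
img = rng.random((128, 128))
blurred = 0.5 * (img + np.roll(img, 1, axis=0))
print(mutual_information(img, blurred))
```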
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those that occur, e.g., in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modelling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
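As a minimal illustration of information-criterion-based model discrimination of the kind used for the surrogate models, the sketch below selects the order of a scalar autoregressive model by plain AIC/BIC; the project's vector-valued models with external influences and the modified AIC are not reproduced here.

```python
# AIC/BIC order selection for a least-squares-fitted scalar AR(p) model.
import numpy as np

def fit_ar_ls(x, p):
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    return coef, rss, len(y)

def aic_bic(rss, n, k):
    aic = n * np.log(rss / n) + 2 * k          # Gaussian least-squares AIC
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):                         # synthetic AR(2) process
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

for p in range(1, 6):
    _, rss, n = fit_ar_ls(x, p)
    print(p, aic_bic(rss, n, p))
```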
Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing
NASA Astrophysics Data System (ADS)
Rabbitt, Christopher
This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.
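For reference, the Modal Assurance Criterion between analytical and experimental mode-shape sets is the normalized squared inner product of the shape vectors; a short sketch follows (illustrative data, not the thesis model).

```python
# Modal Assurance Criterion (MAC) matrix between two sets of mode shapes;
# values near 1 indicate well-correlated analytical and experimental modes.
import numpy as np

def mac_matrix(phi_a, phi_e):
    """phi_a, phi_e: (n_dof, n_modes) arrays of mode shape vectors."""
    num = np.abs(phi_a.conj().T @ phi_e) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_e) ** 2, axis=0))
    return num / den

rng = np.random.default_rng(0)
phi_fe = rng.normal(size=(50, 4))                    # finite element modes
phi_test = phi_fe + 0.05 * rng.normal(size=(50, 4))  # "measured" modes
print(np.round(mac_matrix(phi_fe, phi_test), 3))
```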
Application of Single Crystal Failure Criteria: Theory and Turbine Blade Case Study
NASA Technical Reports Server (NTRS)
Sayyah, Tarek; Swanson, Gregory R.; Schonberg, W. P.
1999-01-01
The orientation of the single crystal material within a structural component is known to affect the strength and life of the part. The first stage blade of the High Pressure Fuel Turbopump (HPFTP)/Alternative Turbopump Development (ATD) of the Space Shuttle Main Engine (SSME) was used to study the effects of secondary axis orientation angles on the failure rate of the blade. A new failure criterion was developed based on normal and shear strains on the primary crystallographic planes. The criterion was verified using low cycle fatigue (LCF) specimen data and a finite element model of the test specimens. The criterion was then used to study ATD/HPFTP first stage blade failure events. A detailed ANSYS finite element model of the blade was used to calculate the failure parameter for the different crystallographic orientations. A total of 297 cases were run to cover a wide range of acceptable orientations within the blade. Those orientations are related to the base crystallographic coordinate system that was created in the ANSYS finite element model. Contour plots of the criterion as a function of orientation for the blade tip and attachment were obtained. Results of the analysis revealed a 40% increase in the failure parameter due to changing of the primary and secondary axes of material orientations. A comparison between failure criterion predictions and actual engine test data was then conducted. The engine test data comes from two ATD/HPFTP builds (units F3-4B and F6-5D), which were ground tested on the SSME at the Stennis Space Center in Mississippi. Both units experienced cracking of the airfoil tips in multiple blades, but only a few cracks grew all the way across the wall of the hollow core airfoil.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulted approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
Kandhasamy, Chandrasekaran; Ghosh, Kaushik
2017-02-01
Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states, nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use R and WinBUGS software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using the distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.
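The model comparison here relies on the Deviance Information Criterion. As a reminder of how that criterion is computed (the standard definition, not the authors' WinBUGS code), the sketch below evaluates DIC for a toy Poisson rate model using posterior samples.

```python
# DIC = Dbar + pD, with pD = Dbar - D(theta_bar), illustrated for a toy
# conjugate Poisson-Gamma model (not the authors' CAR spatial models).
import numpy as np
from scipy.stats import poisson, gamma

rng = np.random.default_rng(0)
y = rng.poisson(lam=4.0, size=30)                    # toy count data

# Conjugate Gamma posterior for the Poisson rate (prior Gamma(1, 1))
post = gamma(a=1 + y.sum(), scale=1.0 / (1 + len(y)))
samples = post.rvs(size=5000, random_state=0)

def deviance(lam):
    return -2.0 * poisson.logpmf(y, mu=lam).sum()

D_samples = np.array([deviance(l) for l in samples])
D_bar = D_samples.mean()
p_D = D_bar - deviance(samples.mean())
print("DIC =", D_bar + p_D, "  pD =", p_D)
```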
Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta
2013-08-01
We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. Copyright © 2012 Wiley Periodicals, Inc.
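The model-selection step described above amounts to fitting linear, quadratic, and cubic polynomials of age and keeping the one with the lowest AIC. The sketch below illustrates this on synthetic data; the least-squares form of AIC is an assumption, and the data are not from the study.

```python
# AIC-based choice among linear, quadratic, and cubic age trajectories.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(5, 18, size=291)
volume = 100 + 6 * age - 0.25 * age ** 2 + rng.normal(scale=3, size=age.size)

def aic_poly(x, y, degree):
    coefs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coefs, x)) ** 2))
    k = degree + 1                        # number of fitted coefficients
    return len(y) * np.log(rss / len(y)) + 2 * k

aics = {d: round(aic_poly(age, volume, d), 1) for d in (1, 2, 3)}
print("AIC values:", aics)
print("best-fitting degree:", min(aics, key=aics.get))
```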
A Primer for Model Selection: The Decisive Role of Model Complexity
NASA Astrophysics Data System (ADS)
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed substantial improvements in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newly introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data from Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results obtained with the zone divisions to those obtained with an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
Test Design Optimization in CAT Early Stage with the Nominal Response Model
ERIC Educational Resources Information Center
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E.
2007-01-01
The early stage of computerized adaptive testing (CAT) refers to the phase of the trait estimation during the administration of only a few items. This phase can be characterized by bias and instability of estimation. In this study, an item selection criterion is introduced in an attempt to lessen this instability: the D-optimality criterion. A…
ERIC Educational Resources Information Center
Cantor, Jeffrey A.; Hobson, Edward N.
The development of a test design methodology used to construct a criterion-referenced System Achievement Test (CR-SAT) for selected Naval enlisted classification (NEC) in the Strategic Weapon System (SWS) of the United States Navy is described. Subject matter experts, training data analysts and educational specialists developed a comprehensive…
ERIC Educational Resources Information Center
Webb, Leland F.
The purpose of this study was to confirm or deny Carry's findings in an earlier Aptitude Treatment Interaction (ATI) study by implementing his suggestions to: (1) revise instructional treatments, (2) improve the criterion measures, (3) use four predictor tests, (4) add time to criterion measure, and (5) use a theoretical model to identify relevant…
A Case for Transforming the Criterion of a Predictive Validity Study
ERIC Educational Resources Information Center
Patterson, Brian F.; Kobrin, Jennifer L.
2011-01-01
This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
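A minimal sketch of the Box-Cox (1964) transformation step on a skewed, positive criterion variable is given below; the data are illustrative, and the study's actual criterion and cohort are not reproduced.

```python
# Box-Cox transformation of a positive, right-skewed criterion variable.
import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(0)
criterion = rng.lognormal(mean=1.0, sigma=0.4, size=500)  # skewed criterion

transformed, lam = boxcox(criterion)   # lambda chosen by maximum likelihood
print("estimated lambda:", round(lam, 3))
# y(lambda) = (y**lam - 1) / lam for lam != 0, and log(y) for lam == 0
```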
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
Feature combinations and the divergence criterion
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Mayekar, S. M.
1976-01-01
Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of dimension-reducing, information-preserving transformations. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension-reducing transformations for multivariate normal populations are presented. Information content is measured by the divergence.
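A standard expression for the divergence between two multivariate normal class populations, the symmetric Kullback-Leibler distance, is sketched below; this is the usual separability measure in this setting and is stated as an assumption, not quoted from the report.

```python
# Divergence (symmetric KL distance) between N(mu1, S1) and N(mu2, S2).
import numpy as np

def divergence(mu1, S1, mu2, S2):
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    d = mu1 - mu2
    term_cov = 0.5 * np.trace(S1i @ S2 + S2i @ S1 - 2 * np.eye(len(mu1)))
    term_mean = 0.5 * d @ (S1i + S2i) @ d
    return term_cov + term_mean

mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
S1 = np.array([[1.0, 0.2], [0.2, 1.0]])
S2 = np.array([[1.5, 0.0], [0.0, 0.8]])
print(divergence(mu1, S1, mu2, S2))
```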
2016-01-01
Reports an error in "A violation of the conditional independence assumption in the two-high-threshold model of recognition memory" by Tina Chen, Jeffrey J. Starns and Caren M. Rotello (Journal of Experimental Psychology: Learning, Memory, and Cognition, 2015[Jul], Vol 41[4], 1215-1222). In the article, Chen et al. compared three models: a continuous signal detection model (SDT), a standard two-high-threshold discrete-state model in which detect states always led to correct responses (2HT), and a full-mapping version of the 2HT model in which detect states could lead to either correct or incorrect responses. After publication, Rani Moran (personal communication, April 21, 2015) identified two errors that impact the reported fit statistics for the Bayesian information criterion (BIC) metric of all models as well as the Akaike information criterion (AIC) results for the full-mapping model. The errors are described in the erratum. (The following abstract of the original article appeared in record 2014-56216-001.) The 2-high-threshold (2HT) model of recognition memory assumes that test items result in distinct internal states: they are either detected or not, and the probability of responding at a particular confidence level that an item is "old" or "new" depends on the state-response mapping parameters. The mapping parameters are independent of the probability that an item yields a particular state (e.g., both strong and weak items that are detected as old have the same probability of producing a highest-confidence "old" response). We tested this conditional independence assumption by presenting nouns 1, 2, or 4 times. To maximize the strength of some items, "superstrong" items were repeated 4 times and encoded in conjunction with pleasantness, imageability, anagram, and survival processing tasks. The 2HT model failed to simultaneously capture the response rate data for all item classes, demonstrating that the data violated the conditional independence assumption. In contrast, a Gaussian signal detection model, which posits that the level of confidence that an item is "old" or "new" is a function of its continuous strength value, provided a good account of the data. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Orbital Evasive Target Tracking and Sensor Management
2012-03-30
…sensor management is to maximize the total information gain in the observer-to-target assignment. We compare the information-based approach to the game-theoretic criterion for … tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach…
NASA Astrophysics Data System (ADS)
Sigmund, Peter
The mean equilibrium charge of a penetrating ion can be estimated on the basis of Bohr's velocity criterion or Lamb's energy criterion. Qualitative and quantitative results are derived on the basis of the Thomas-Fermi model of the atom, which is discussed explicitly. This includes a brief introduction to the Thomas-Fermi-Dirac model. Special attention is paid to trial-function approaches by Lenz and Jensen as well as Brandt and Kitagawa. The chapter also offers a preliminary discussion of the role of the stopping medium, gas-solid differences, and a survey of data compilations.
Orthotropic elasto-plastic behavior of AS4/APC-2 thermoplastic composite in compression
NASA Technical Reports Server (NTRS)
Sun, C. T.; Rui, Y.
1989-01-01
Uniaxial compression tests were performed on off-axis coupon specimens of unidirectional AS4/APC-2 thermoplastic composite at various temperatures. The elasto-plastic and strength properties of AS4/APC-2 composite were characterized with respect to temperature variation by using a one-parameter orthotropic plasticity model and a one-parameter failure criterion. Experimental results show that the orthotropic plastic behavior can be characterized quite well using the plasticity model, and the matrix-dominant compressive strengths can be predicted very accurately by the one-parameter failure criterion.
On thermonuclear ignition criterion at the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Baolian; Kwan, Thomas J. T.; Wang, Yi-Ming
2014-10-15
Sustained thermonuclear fusion at the National Ignition Facility remains elusive. Although recent experiments approached or exceeded the anticipated ignition thresholds, the nuclear performance of the laser-driven capsules was well below predictions in terms of energy and neutron production. Such discrepancies between expectations and reality motivate a reassessment of the physics of ignition. We have developed a predictive analytical model from fundamental physics principles. Based on the model, we obtained a general thermonuclear ignition criterion in terms of the areal density and temperature of the hot fuel. This newly derived ignition threshold and its alternative forms explicitly show the minimum requirements of the hot fuel pressure, mass, areal density, and burn fraction for achieving ignition. Comparison of our criterion with existing theories, simulations, and the experimental data shows that our ignition threshold is more stringent than those in the existing literature and that our results are consistent with the experiments.
Thermal Signature Identification System (TheSIS)
NASA Technical Reports Server (NTRS)
Merritt, Scott; Bean, Brian
2015-01-01
We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection
Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B
2015-01-01
We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO-penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
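A rough sketch of the underlying idea, under our own simplifying assumptions rather than the authors' exact algorithm: for column-standardized predictors, the smallest LASSO penalty that yields the all-zero solution is max|X'(y - ȳ)|/n, so permuting the response and taking a summary (here the median) of that statistic gives a penalty calibrated against pure-noise association.

```python
# Permutation-calibrated LASSO penalty (illustrative approximation only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
X = (X - X.mean(0)) / X.std(0)
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(size=n)

def lambda_null(X, y_perm):
    # smallest penalty (sklearn's alpha scale) giving the all-zero LASSO fit
    return np.max(np.abs(X.T @ (y_perm - y_perm.mean()))) / len(y_perm)

lams = [lambda_null(X, rng.permutation(y)) for _ in range(100)]
lam = float(np.median(lams))           # summary over permutations

fit = Lasso(alpha=lam).fit(X, y)
print("selected penalty:", round(lam, 3),
      " nonzero coefficients:", int(np.sum(fit.coef_ != 0)))
```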
Persoskie, Alexander; Nguyen, Anh B.; Kaufman, Annette R.; Tworek, Cindy
2017-01-01
Beliefs about the relative harmfulness of one product compared to another (perceived relative harm) are central to research and regulation concerning tobacco and nicotine-containing products, but techniques for measuring such beliefs vary widely. We compared the validity of direct and indirect measures of perceived harm of e-cigarettes and smokeless tobacco (SLT) compared to cigarettes. On direct measures, participants explicitly compare the harmfulness of each product. On indirect measures, participants rate the harmfulness of each product separately, and ratings are compared. The U.S. Health Information National Trends Survey (HINTS-FDA-2015; N=3738) included direct measures of perceived harm of e-cigarettes and SLT compared to cigarettes. Indirect measures were created by comparing ratings of harm from e-cigarettes, SLT, and cigarettes on 3-point scales. Logistic regressions tested validity by assessing whether direct and indirect measures were associated with criterion variables including: ever-trying e-cigarettes, ever-trying snus, and SLT use status. Compared to the indirect measures, the direct measures of harm were more consistently associated with criterion variables. On direct measures, 26% of adults rated e-cigarettes as less harmful than cigarettes, and 11% rated SLT as less harmful than cigarettes. Direct measures appear to provide valid information about individuals’ harm beliefs, which may be used to inform research and tobacco control policy. Further validation research is encouraged. PMID:28073035
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
Discrete-time model reduction in limited frequency ranges
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Juang, Jer-Nan; Longman, Richard W.
1991-01-01
A mathematical formulation for model reduction of discrete-time systems such that the reduced-order model represents the system in a particular frequency range is discussed. The algorithm transforms the full-order system into balanced coordinates using frequency-weighted discrete controllability and observability grammians. In this form, a criterion is derived to guide truncation of states based on their contribution to the frequency range of interest. Minimization of the criterion is accomplished without need for numerical optimization. Balancing requires the computation of discrete frequency-weighted grammians. Closed-form solutions for the computation of frequency-weighted grammians are developed. Numerical examples are discussed to demonstrate the algorithm.
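As background for the grammian-based machinery, the sketch below performs standard (unweighted) discrete-time balanced truncation with SciPy; the frequency weighting of the grammians and the frequency-range truncation criterion developed in the paper are not reproduced here.

```python
# Standard (unweighted) discrete-time balanced truncation via Lyapunov grammians.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, svd, cholesky

def balanced_truncation(A, B, C, r):
    Wc = solve_discrete_lyapunov(A, B @ B.T)     # A Wc A' - Wc + B B' = 0
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)   # A' Wo A - Wo + C' C = 0
    Lc = cholesky(Wc, lower=True)
    U, s, _ = svd(Lc.T @ Wo @ Lc)                # s = (Hankel singular values)**2
    hsv = np.sqrt(s)
    T = Lc @ U / hsv**0.5                        # balancing transformation
    Ti = np.linalg.inv(T)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:r, :r], Bb[:r], Cb[:, :r], hsv    # keep the r dominant states

A = np.array([[0.9, 0.1], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)
print("Hankel singular values:", np.round(hsv, 4))
```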
Restoration of STORM images from sparse subset of localizations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.
2016-02-01
To construct a Stochastic Optical Reconstruction Microscopy (STORM) image one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model which allows us to predict the probability for every location to be occupied by a fluorophore at the end of a hypothetical acquisition, taking as input parameters the distribution of already localized fluorophores in the proximity of this location. We show that the probability map obtained from a number of fluorophores 3-4 times smaller than required by the Nyquist criterion may be used as a superresolution image itself. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion, proportionally decreasing STORM data acquisition time. This method may be used in combination with other approaches designed to increase STORM time resolution.
Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array
NASA Astrophysics Data System (ADS)
Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.
2018-01-01
The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.
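The AIC and MDL reference criteria mentioned above are commonly written in terms of the eigenvalues of the sample correlation matrix. The sketch below uses that common eigenvalue-based formulation (an assumption on our part, not the paper's minimal-polynomial criterion) for an M-element array with N snapshots.

```python
# Eigenvalue-based AIC/MDL source-number estimates (common textbook form).
import numpy as np

def source_number(R, N):
    lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    M = len(lam)
    aic, mdl = [], []
    for k in range(M):
        tail = lam[k:]
        log_ratio = np.log(tail.mean()) - np.mean(np.log(tail))
        L = N * (M - k) * log_ratio
        aic.append(2 * L + 2 * k * (2 * M - k))
        mdl.append(L + 0.5 * k * (2 * M - k) * np.log(N))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# Toy example: two uncorrelated sources on an 8-element array, 20 snapshots.
rng = np.random.default_rng(0)
M, N, d = 8, 20, 2
A = rng.normal(size=(M, d)) + 1j * rng.normal(size=(M, d))
S = rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))
X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
R = X @ X.conj().T / N
print("AIC / MDL estimates of source number:", source_number(R, N))
```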
Human striatal activation during adjustment of the response criterion in visual word recognition.
Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred
2011-02-01
Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.
Prediction of Hot Tearing Using a Dimensionless Niyama Criterion
NASA Astrophysics Data System (ADS)
Monroe, Charles; Beckermann, Christoph
2014-08-01
The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
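For context, the classical (dimensional) Niyama criterion that the dimensionless form extends is the ratio of the thermal gradient to the square root of the cooling rate; the strain-extended, dimensionless expression derived in the paper is not reproduced here.

```latex
% Classical (dimensional) Niyama criterion, stated for background only:
\begin{equation}
  \mathrm{Ny} = \frac{G}{\sqrt{\dot{T}}},
\end{equation}
% with G the temperature gradient and \dot{T} the cooling rate near the end of
% solidification; low local values of Ny indicate shrinkage-porosity risk.
```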
An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hatami, M. M., E-mail: m-hatami@kntu.ac.ir
2015-04-15
The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is analytically investigated by using the hydrodynamic model. It is assumed that the electron and negative ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive and negative ion temperatures, the orientation of the applied magnetic field and the charge number of positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To determine the validity of our derived generalized Bohm criterion, it is reduced to familiar physical conditions, and it is shown that the monotonic reduction of the positive ion density distribution leading to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of the ionic temperature and concentration as well as the magnetic field on the behavior of the charged particle density distributions, and hence the sheath thickness, of a magnetized plasma consisting of electrons and singly charged positive and negative ion species are studied numerically.
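As a point of reference, the textbook single-ion-species, unmagnetized limit to which generalized sheath criteria of this kind reduce is the familiar Bohm condition; the paper's magnetized, multi-species expression is not reproduced here.

```latex
% Textbook single-ion, unmagnetized Bohm criterion (background only):
\begin{equation}
  u_{0} \;\ge\; c_{s} \;=\; \sqrt{\frac{k_{B} T_{e}}{m_{i}}},
\end{equation}
% where u_0 is the ion velocity at the sheath edge, T_e the electron
% temperature, and m_i the ion mass.
```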
A new criterion needed to evaluate reliability of digital protective relays
NASA Astrophysics Data System (ADS)
Gurevich, Vladimir
2012-11-01
There is a wide range of criteria and features for evaluating reliability in engineering; but as many as there are, only one of them has been chosen to evaluate reliability of Digital Protective Relays (DPR) in the technical documentation: Mean (operating) Time Between Failures (MTBF), which has gained universal currency and has been specified in technical manuals, information sheets, tender documentation as the key indicator of DPR reliability. But is the choice of this criterion indeed wise? The answer to this question is being sought by the author of this article.
Evaluation of failure criterion for graphite/epoxy fabric laminates
NASA Technical Reports Server (NTRS)
Tennyson, R. C.; Wharram, G. E.
1985-01-01
The development and application of the tensor polynomial failure criterion for composite laminate analysis is described. Emphasis is given to the fabrication and testing of Narmco Rigidite 5208-WT300, a plain weave fabric of Thornel 300 Graphite fibers impregnated with Narmco 5208 Resin. The quadratic failure criterion with F12 = 0 provides accurate estimates of failure stresses for the graphite/epoxy investigated. The cubic failure criterion was recast into an operationally easier form, providing design curves that can be applied to laminates fabricated from orthotropic woven fabric prepregs. In the form presented, no interaction strength tests are required, although recourse to the quadratic model and the principal strength parameters is necessary. However, insufficient test data exist at present to generalize this approach for all prepreg constructions, and its use must be restricted to the generic materials and configurations investigated to date.
Convective instabilities in SN 1987A
NASA Technical Reports Server (NTRS)
Benz, Willy; Thielemann, Friedrich-Karl
1990-01-01
Following Bandiera (1984), it is shown that the relevant criterion to determine the stability of a blast wave, propagating through the layers of a massive star in a supernova explosion, is the Schwarzschild (or Ledoux) criterion rather than the Rayleigh-Taylor criterion. Both criteria coincide only in the incompressible limit. Results of a linear stability analysis are presented for a one-dimensional (spherical) explosion in a realistic model for the progenitor of SN 1987A. When applying the Schwarzschild criterion, unstable regions get extended considerably. Convection is found to develop behind the shock, with a characteristic growth rate corresponding to a time scale much smaller than the shock traversal time. This ensures that efficient mixing will take place. Since the entire ejected mass is found to be convectively unstable, Ni can be transported outward, even into the hydrogen envelope, while hydrogen can be mixed deep into the helium core.
Sarasa, Mathieu; Soriguer, Ramón C; Serrano, Emmanuel; Granados, José-Enrique; Pérez, Jesús M
2014-01-01
Most studies of lateralized behaviour have to date focused on active behaviour such as sensorial perception and locomotion and little is known about lateralized postures, such as lying, that can potentially magnify the effectiveness of lateralized perception and reaction. Moreover, the relative importance of factors such as sex, age and the stress associated with social status in laterality is now a subject of increasing interest. In this study, we assess the importance of sex, age and reproductive investment in females in lying laterality in the Iberian ibex (Capra pyrenaica). Using generalized additive models under an information-theoretic approach based on the Akaike information criterion, we analyzed lying laterality of 78 individually marked ibexes. Sex, age and nursing appeared as key factors associated, in interaction and non-linearly, with lying laterality. Beyond the benefits of studying laterality with non-linear models, our results highlight the fact that a combination of static factors such as sex, and dynamic factors such as age and stress associated with parental care, are associated with postural laterality.
NASA Astrophysics Data System (ADS)
Kellici, Tahsin F.; Ntountaniotis, Dimitrios; Vanioti, Marianna; Golic Grdadolnik, Simona; Simcic, Mihael; Michas, Georgios; Moutevelis-Minakakis, Panagiota; Mavromoustakos, Thomas
2017-02-01
During the synthesis of new pyrrolidinone analogs possessing biological activity it is intriguing to assign their absolute stereochemistry as it is well known that drug potency is influenced by the stereochemistry. The combination of J-coupling information with theoretical results was used in order to establish their total stereochemistry when the chiral center of the starting material has known absolute stereochemistry. The J-coupling can be used as a sole criterion for novel synthetic analogs to identify the right stereochemistry. This approach is extremely useful especially in the case of analogs whose 2D NOESY spectra cannot provide this information. Few synthetic examples are given to prove the significance of this approach.
[Information value of "additional tasks" method to evaluate pilot's work load].
Gorbunov, V V
2005-01-01
"Additional task" method was used to evaluate pilot's work load in prolonged flight. Calculated through durations of latent periods of motor responses, quantitative criterion of work load is more informative for objective evaluation of pilot's involvement in his piloting functions rather than of other registered parameters.
Estimation of a Stopping Criterion for Geophysical Granular Flows Based on Numerical Experimentation
NASA Astrophysics Data System (ADS)
Yu, B.; Dalbey, K.; Bursik, M.; Patra, A.; Pitman, E. B.
2004-12-01
Inundation area may be the most important factor for mitigation of natural hazards related to avalanches, debris flows, landslides and pyroclastic flows. Run-out distance is the key parameter for inundation because the front deposits define the leading edge of inundation. To define the run-out distance, it is necessary to know when a flow stops. Numerical experiments are presented for determining a stopping criterion and exploring the suitability of a Savage-Hutter granular model for computing inundation areas of granular flows. The TITAN2D model was employed to run numerical experiments based on the Savage-Hutter theory. A potentially reasonable stopping criterion was found as a function of dimensionless average velocity, aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Slumping piles on a horizontal surface and geophysical flows over complex topography were simulated. Several mountainous areas, including Colima volcano (MX), Casita (Nic.), Little Tahoma Peak (WA, USA) and the San Bernardino Mountains (CA, USA) were used to simulate geophysical flows. Volcanic block and ash flows, debris avalanches and debris flows occurred in these areas and caused varying degrees of damage. The areas have complex topography, including locally steep open slopes, sinuous channels, and combinations of these. With different topography and physical scaling, slumping piles and geophysical flows have a somewhat different dependence of dimensionless stopping velocity on power-law constants associated with aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Visual comparison of the details of the inundation area obtained from the TITAN2D model with models that contain some form of viscous dissipation point out weaknesses in the model that are not evident by investigation of the stopping criterion alone.
Morgado, José Mário T; Sánchez-Muñoz, Laura; Teodósio, Cristina G; Jara-Acevedo, Maria; Alvarez-Twose, Iván; Matito, Almudena; Fernández-Nuñez, Elisa; García-Montero, Andrés; Orfao, Alberto; Escribano, Luís
2012-04-01
Aberrant expression of CD2 and/or CD25 by bone marrow, peripheral blood or other extracutaneous tissue mast cells is currently used as a minor World Health Organization diagnostic criterion for systemic mastocytosis. However, the diagnostic utility of CD2 versus CD25 expression by mast cells has not been prospectively evaluated in a large series of systemic mastocytosis. Here we evaluate the sensitivity and specificity of CD2 versus CD25 expression in the diagnosis of systemic mastocytosis. Mast cells from a total of 886 bone marrow and 153 other non-bone marrow extracutaneous tissue samples were analysed by multiparameter flow cytometry following the guidelines of the Spanish Network on Mastocytosis at two different laboratories. The 'CD25+ and/or CD2+ bone marrow mast cells' World Health Organization criterion showed an overall sensitivity of 100% with 99.0% specificity for the diagnosis of systemic mastocytosis whereas CD25 expression alone presented a similar sensitivity (100%) with a slightly higher specificity (99.2%). Inclusion of CD2 did not improve the sensitivity of the test and it decreased its specificity. In tissues other than bone marrow, the mast cell phenotypic criterion revealed to be less sensitive. In summary, CD2 expression does not contribute to improve the diagnosis of systemic mastocytosis when compared with aberrant CD25 expression alone, which supports the need to update and replace the minor World Health Organization 'CD25+ and/or CD2+' mast cell phenotypic diagnostic criterion by a major criterion based exclusively on CD25 expression.
Cryo-EM Data Are Superior to Contact and Interface Information in Integrative Modeling.
de Vries, Sjoerd J; Chauvot de Beauchêne, Isaure; Schindler, Christina E M; Zacharias, Martin
2016-02-23
Protein-protein interactions carry out a large variety of essential cellular processes. Cryo-electron microscopy (cryo-EM) is a powerful technique for the modeling of protein-protein interactions at a wide range of resolutions, and recent developments have caused a revolution in the field. At low resolution, cryo-EM maps can drive integrative modeling of the interaction, assembling existing structures into the map. Other experimental techniques can provide information on the interface or on the contacts between the monomers in the complex. This inevitably raises the question regarding which type of data is best suited to drive integrative modeling approaches. Systematic comparison of the prediction accuracy and specificity of the different integrative modeling paradigms is unavailable to date. Here, we compare EM-driven, interface-driven, and contact-driven integrative modeling paradigms. Models were generated for the protein docking benchmark using the ATTRACT docking engine and evaluated using the CAPRI two-star criterion. At 20 Å resolution, EM-driven modeling achieved a success rate of 100%, outperforming the other paradigms even with perfect interface and contact information. Therefore, even very low resolution cryo-EM data is superior in predicting heterodimeric and heterotrimeric protein assemblies. Our study demonstrates that a force field is not necessary, cryo-EM data alone is sufficient to accurately guide the monomers into place. The resulting rigid models successfully identify regions of conformational change, opening up perspectives for targeted flexible remodeling. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Michael S. Williams; Kenneth L. Cormier; Ronald G. Briggs; Donald L. Martinez
1999-01-01
Calibrated Barr & Stroud FP15 and Criterion 400 laser dendrometers were tested for reliability in measuring upper stem diameters and heights under typical field conditions. Data were collected in the Black Hills National Forest, which covers parts of South Dakota and Wyoming in the United States. Mixed effects models were employed to account for differences between...
ERIC Educational Resources Information Center
Lee, Wan-Fung; Bulcock, Jeffrey Wilson
The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
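As background, the contrast between ordinary least squares and simple ridge regression can be sketched directly from the closed-form estimators; the variance-normalization modification studied in the paper is not reproduced here.

```python
# OLS versus simple ridge regression, beta_ridge = (X'X + kI)^{-1} X'y.
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 5
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)   # near-collinear predictors
y = X @ np.array([1.0, 1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
k = 1.0                                         # illustrative ridge constant
beta_ridge = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

print("OLS:  ", np.round(beta_ols, 2))
print("Ridge:", np.round(beta_ridge, 2))
```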
Probing dark energy in the scope of a Bianchi type I spacetime
NASA Astrophysics Data System (ADS)
Amirhashchi, Hassan
2018-03-01
It is well known that the flat Friedmann-Robertson-Walker metric is a special case of the Bianchi type I spacetime. In this paper, we use 38 Hubble parameter measurements, H(z), at intermediate redshifts 0.07 ≤ z ≤ 2.36 and their joint combination with the latest "joint light curves" (JLA) sample, comprising 740 type Ia supernovae in the redshift range z ∈ [0.01, 1.30], to constrain the parameters of the Bianchi type I dark energy model. We also use the same datasets to constrain a flat ΛCDM model. In both cases, we specifically address the determination of the expansion rate H0 as well as the transition redshift zt from these measurements. In both models, we find that using the joint combination of datasets gives rise to lower values of the model parameters. Finally, to compare the considered cosmologies, we perform Akaike information criterion and Bayes factor (Ψ) tests.
MMI: Multimodel inference or models with management implications?
Fieberg, J.; Johnson, Douglas H.
2015-01-01
We consider a variety of regression modeling strategies for analyzing observational data associated with typical wildlife studies, including all subsets and stepwise regression, a single full model, and Akaike's Information Criterion (AIC)-based multimodel inference. Although there are advantages and disadvantages to each approach, we suggest that there is no unique best way to analyze data. Further, we argue that, although multimodel inference can be useful in natural resource management, the importance of considering causality and accurately estimating effect sizes is greater than simply considering a variety of models. Determining causation is far more valuable than simply indicating how the response variable and explanatory variables covaried within a data set, especially when the data set did not arise from a controlled experiment. Understanding the causal mechanism will provide much better predictions beyond the range of data observed. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Vidal, Fernando
2018-03-01
Argument "Deficit model" designates an outlook on the public understanding and communication of science that emphasizes scientific illiteracy and the need to educate the public. Though criticized, it is still widespread, especially among scientists. Its persistence is due not only to factors ranging from scientists' training to policy design, but also to the continuance of realism as an aesthetic criterion. This article examines the link between realism and the deficit model through discussions of neurology and psychiatry in fiction film, as well as through debates about historical movies and the cinematic adaptation of literature. It shows that different values and criteria tend to dominate the realist stance in different domains: accuracy for movies concerning neurology and psychiatry, authenticity for the historical film, and fidelity for adaptations of literature. Finally, contrary to the deficit model, it argues that the cinema is better characterized by a surplus of meaning than by informational shortcomings.
Kinetics of Methane Production from Swine Manure and Buffalo Manure.
Sun, Chen; Cao, Weixing; Liu, Ronghou
2015-10-01
The degradation kinetics of swine and buffalo manure for methane production was investigated. Six kinetic models were employed to describe the corresponding experimental data. These models were evaluated with two statistical measures: the root mean square prediction error (RMSPE) and Akaike's information criterion (AIC). The results showed that the logistic and Fitzhugh models predicted the experimental data very well for the digestion of swine and buffalo manure, respectively. The predicted methane yield potentials for swine and buffalo manure were 487.9 and 340.4 mL CH4/g volatile solid (VS), respectively, close to the experimental values, when the digestion temperature was 36 ± 1 °C in the biochemical methane potential assays. In addition, the rate constants revealed that swine manure had a much faster methane production rate than buffalo manure.
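To illustrate the kind of model screening described in this abstract, the sketch below fits a generic three-parameter logistic curve to a hypothetical cumulative methane-yield series and scores it with RMSPE and a least-squares form of AIC. The data, starting values and the exact logistic parametrization are assumptions for illustration; the study's actual datasets and model forms are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative methane yield (mL CH4 / g VS) versus digestion time (days).
t = np.array([0, 2, 4, 6, 8, 10, 15, 20, 25, 30], dtype=float)
y = np.array([0, 30, 95, 190, 290, 360, 450, 475, 483, 487], dtype=float)

def logistic(t, P, k, t0):
    """Generic three-parameter logistic curve; P plays the role of the yield potential."""
    return P / (1.0 + np.exp(-k * (t - t0)))

popt, _ = curve_fit(logistic, t, y, p0=[500.0, 0.3, 8.0])
pred = logistic(t, *popt)

n, n_par = len(y), len(popt)
rss = np.sum((y - pred) ** 2)
rmspe = np.sqrt(rss / n)               # root mean square prediction error
aic = n * np.log(rss / n) + 2 * n_par  # least-squares form of Akaike's criterion

print(f"P = {popt[0]:.1f} mL CH4/g VS, RMSPE = {rmspe:.2f}, AIC = {aic:.2f}")
```

Repeating the fit for each candidate kinetic model and ranking the results by AIC mirrors the comparison reported in the abstract.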
Thermal signature identification system (TheSIS): a spread spectrum temperature cycling method
NASA Astrophysics Data System (ADS)
Merritt, Scott
2015-03-01
NASA GSFC's Thermal Signature Identification System (TheSIS) 1) measures the high-order dynamic responses of optoelectronic components to direct-sequence spread-spectrum temperature cycling, 2) estimates the parameters of multiple autoregressive moving average (ARMA) or other models of the responses, and 3) selects the most appropriate model using the Akaike Information Criterion (AIC). Using the AIC-tested model and parameter vectors from TheSIS, one can 1) select high-performing components on a multivariate basis, i.e., with multivariate Figures of Merit (FOMs), 2) detect subtle reversible shifts in performance, and 3) investigate irreversible changes in component or subsystem performance, e.g., aging. We show examples of the TheSIS methodology for passive and active components and systems, e.g., fiber Bragg gratings (FBGs) and DFB lasers with coupled temperature control loops, respectively.
Physical and Constructive (Limiting) Criterions of Gear Wheels Wear
NASA Astrophysics Data System (ADS)
Fedorov, S. V.
2018-01-01
We suggest using a generalized model of friction - the model of elastic-plastic deformation of a body element located on the surface of the friction pair. This model is based on our new engineering approach to the problem of friction - triboergodynamics. Friction is examined as a transformative and dissipative process. A structural-energetic interpretation of friction as a process of elasto-plastic deformation and fracture of contact volumes is proposed. The model of the evolution of a Hertzian (heavily loaded) friction contact is considered. The least-wear-particle principle is formulated: the least particle is the mechanical (nano) quantum. The mechanical quantum represents the smallest structural form of a solid material body under friction conditions. It is the dynamic oscillator of the dissipative friction structure and can be regarded as the elementary nanostructure of a metallic solid body. In friction at the state of most complete evolution of the elementary tribosystem (tribocontact), all mechanical quanta (subtribosystems), with the exception of one, elastically and reversibly transform the energy of the outer impact (mechanical movement). In these terms only one mechanical quantum is lost - the standard of wear. From this position we can formulate the physical criterion of wear and the constructive (limiting) criterion for gear teeth, and consider other practical examples of tribosystem efficiency, using the new tribological notion of the mechanical (nano) quantum.
What constitutes evidence-based patient information? Overview of discussed criteria.
Bunge, Martina; Mühlhauser, Ingrid; Steckelberg, Anke
2010-03-01
To survey quality criteria for evidence-based patient information (EBPI) and to compile the evidence for the identified criteria. Databases PubMed, Cochrane Library, PsycINFO, PSYNDEX and Education Research Information Center (ERIC) were searched to update the pool of criteria for EBPI. A subsequent search aimed to identify evidence for each criterion. Only studies on health issues with cognitive outcome measures were included. Evidence for each criterion is presented using descriptive methods. 3 systematic reviews, 24 randomized-controlled studies and 1 non-systematic review were included. Presentation of numerical data, verbal presentation of risks and diagrams, graphics and charts are based on good evidence. Content of information and meta-information, loss- and gain-framing and patient-oriented outcome measures are based on ethical guidelines. There is a lack of studies on quality of evidence, pictures and drawings, patient narratives, cultural aspects, layout, language and development process. The results of this review allow specification of EBPI and may help to advance the discourse among related disciplines. Research gaps are highlighted. Findings outline the type and extent of content of EBPI, guide the presentation of information and describe the development process. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates the interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects by fitting a curve to the distances at the simulation times to identify a time of closest approach and calculating the positions of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
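A minimal sketch of the closest-approach step described above: given positions sampled at discrete simulation times, fit a parabola to the squared separation around the sampled minimum and take its vertex as the time of closest approach. The variable names and the quadratic-fit choice are assumptions; the system's exact curve-fitting procedure is not specified here.

```python
import numpy as np

def closest_approach(times, pos_a, pos_b):
    """Estimate the time and distance of closest approach of two objects.

    times : (n,) simulation times
    pos_a : (n, 3) positions of object A at those times
    pos_b : (n, 3) positions of object B at those times
    """
    d2 = np.sum((pos_a - pos_b) ** 2, axis=1)       # squared separation at each time
    i = int(np.argmin(d2))
    lo, hi = max(i - 1, 0), min(i + 2, len(times))
    if hi - lo < 3:                                 # minimum at an endpoint: no fit possible
        return times[i], np.sqrt(d2[i])
    a, b, c = np.polyfit(times[lo:hi], d2[lo:hi], 2)   # parabola through three samples
    t_min = -b / (2.0 * a)                             # vertex = time of closest approach
    d_min = np.sqrt(max(np.polyval([a, b, c], t_min), 0.0))
    return t_min, d_min
```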
Analysis and fit of stellar spectra using a mega-database of CMFGEN models
NASA Astrophysics Data System (ADS)
Fierro-Santillán, Celia; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago Alfredo; Arrieta, Anabel; Arias, Lorena
2017-11-01
We present a tool for the analysis and fitting of stellar spectra using a mega-database of 15,000 atmosphere models for OB stars. We have developed software tools that allow us to find the model that best fits an observed spectrum, comparing equivalent widths and line ratios in the observed spectrum with all models in the database. We use the Hα, Hβ, Hγ, and Hδ lines as criteria for the stellar gravity, and the ratios He II λ4541/He I λ4471, He II λ4200/(He I+He II λ4026), He II λ4541/He I λ4387, and He II λ4200/He I λ4144 as criteria for Teff.
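The matching step can be pictured as a nearest-model search over the database. The sketch below scores each model by a simple normalized squared distance between observed and model diagnostics (equivalent widths and He line ratios); the weighting scheme and input arrays are assumptions made here, not the tool's actual implementation.

```python
import numpy as np

def best_fit_model(observed, model_grid):
    """Return the index of the database model closest to the observed diagnostics.

    observed   : (m,) measured equivalent widths and line ratios
    model_grid : (n_models, m) the same quantities computed for each CMFGEN model
    """
    obs = np.asarray(observed, dtype=float)
    grid = np.asarray(model_grid, dtype=float)
    # Chi-square-like distance, normalized so widths and ratios are comparable.
    dist = np.sum(((grid - obs) / np.maximum(np.abs(obs), 1e-6)) ** 2, axis=1)
    best = int(np.argmin(dist))
    return best, dist[best]
```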
NASA Astrophysics Data System (ADS)
Liu, Quansheng; Tian, Yongchao; Ji, Peiqi; Ma, Hao
2018-04-01
The three-dimensional (3D) morphology of joints is enormously important for the shear mechanical properties of rock. In this study, three-dimensional morphology scanning tests and direct shear tests are conducted to establish a new peak shear strength criterion. The test results show that (1) surface morphology and normal stress exert significant effects on the peak shear strength and the distribution of the damage area, and (2) the damage area is located at the steepest zone facing the shear direction; as the normal stress increases, it extends from the steepest zone toward less steep zones. Via mechanical analysis, a new formula for the apparent dip angle is developed, and the influences of the apparent dip angle and the average joint height on the potential contact area are discussed. A new peak shear strength criterion, mainly applicable to specimens under compression, is established using new roughness parameters and taking the effects of normal stress and the rock mechanical properties into account. A comparison of this newly established model with the JRC-JCS model and Grasselli's model shows that the new one clearly improves the fit. Compared with earlier models, the new model is simpler and more precise. All the parameters in the new model have clear physical meanings and can be determined directly from the scanned data. In addition, the indexes used in the new model are more rational.
Juang, Wang-Chuan; Huang, Sin-Jhih; Huang, Fong-Dee; Cheng, Pei-Wen; Wann, Shue-Ren
2017-12-01
Emergency department (ED) overcrowding is acknowledged as an increasingly important issue worldwide. Hospital managers are paying increasing attention to ED crowding in order to provide higher-quality medical services to patients. One of the crucial elements of a good management strategy is demand forecasting. Our study sought to construct an adequate model and to forecast monthly ED visits. We retrospectively gathered monthly ED visits from January 2009 to December 2016 to carry out a time series autoregressive integrated moving average (ARIMA) analysis. Initial development of the model was based on past ED visits from 2009 to 2016. A best-fit model was then employed to forecast the monthly ED visits for the next year (2016). Finally, we evaluated the predictive accuracy of the identified model with the mean absolute percentage error (MAPE). The software packages SAS/ETS V.9.4 and Office Excel 2016 were used for all statistical analyses. A series of statistical tests showed that six models, ARIMA (0, 0, 1), ARIMA (1, 0, 0), ARIMA (1, 0, 1), ARIMA (2, 0, 1), ARIMA (3, 0, 1) and ARIMA (5, 0, 1), were candidate models. The model that gave the minimum Akaike information criterion and Schwarz Bayesian criterion and satisfied the assumption of residual independence was selected as the adequate model. Finally, a suitable ARIMA (0, 0, 1) structure, yielding a MAPE of 8.91%, was identified, with the fitted model Visit_t = 7111.161 + (a_t + 0.37462 a_{t-1}). The ARIMA (0, 0, 1) model can be considered adequate for predicting future ED visits, and its forecast results can be used to aid decision-making processes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
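A sketch of the same workflow in Python (the study itself used SAS/ETS): fit the six candidate ARIMA orders, pick the one with the smallest information criterion, and score a one-year hold-out forecast with MAPE. The synthetic visit series and the train/test split are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly ED visit counts standing in for the 2009-2016 series.
rng = np.random.default_rng(0)
visits = pd.Series(7100 + rng.normal(0, 300, 96),
                   index=pd.date_range("2009-01", periods=96, freq="MS"))
train, test = visits[:-12], visits[-12:]          # hold out the final year

candidates = [(0, 0, 1), (1, 0, 0), (1, 0, 1), (2, 0, 1), (3, 0, 1), (5, 0, 1)]
fits = {order: ARIMA(train, order=order).fit() for order in candidates}
best = min(fits, key=lambda o: fits[o].aic)       # .bic gives the Schwarz criterion

forecast = fits[best].forecast(steps=len(test))
mape = float(np.mean(np.abs((test - forecast) / test)) * 100)
print(f"selected ARIMA{best}, AIC = {fits[best].aic:.1f}, MAPE = {mape:.2f}%")
```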
The ethical duty to preserve the quality of scientific information
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Gatti, Albertina; Eusebio, Elisa
2016-04-01
The commitment to communicate and disseminate the knowledge acquired during his or her professional activity is certainly one of the ethical duties of the geologist. Nowadays, however, in the Internet era, the spreading of knowledge involves potential risks that the geologist should be aware of. These risks require a careful analysis aimed at mitigating their effects. The Internet may in fact contribute to spreading (e.g. through websites like Wikipedia) information that is badly or even incorrectly presented. The final result could be an impediment to the diffusion of knowledge and a reduction of its effectiveness, which is precisely the opposite of the goal that a geologist should pursue. Specific criteria for recognizing incorrect or inadequate information would therefore be extremely useful. Their development and application might avoid, or at least reduce, the above-mentioned risk. Ideally, such criteria could also be used to develop algorithms to automatically verify the quality of information available on the Internet. A possible criterion for the quality control of knowledge and scientific information is presented here, together with an example of its application in the field of geology to verify and correct a piece of information available on the Internet. The proposed criterion could also be used to simplify scientific information and increase its informative efficacy.
A study of finite mixture model: Bayesian approach on financial time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Yen; Ismail, Mohd Tahir
2014-07-01
Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because of its asymptotic properties, which provide remarkable results. In addition, the Bayesian method shows consistency, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components of the mixture model is selected using the Bayesian information criterion; identifying the correct number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then used to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber prices and stock market prices for all selected countries.
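For a concrete picture of criterion-based component selection, the sketch below uses scikit-learn's Gaussian mixtures fitted by maximum likelihood (EM) as a simplified stand-in for the paper's Bayesian fitting; the bivariate price-return data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for paired (rubber price change, stock index change) observations.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.2, -0.1], 0.05, (200, 2)),
               rng.normal([-0.3, 0.2], 0.10, (150, 2))])

# Fit k-component mixtures and keep the k with the lowest Bayesian information criterion.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
k_best = min(bic, key=bic.get)
model = GaussianMixture(n_components=k_best, random_state=0).fit(X)
print(f"selected k = {k_best}; component means:\n{model.means_}")
```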
Dental caries clusters among adolescents.
Warren, John J; Van Buren, John M; Levy, Steven M; Marshall, Teresa A; Cavanaugh, Joseph E; Curtis, Alexandra M; Kolker, Justine L; Weber-Gasparoni, Karin
2017-12-01
There have been very few longitudinal studies of dental caries in adolescents, and little study of caries risk factors in this age group. The purpose of this study was to describe different caries trajectories and associated risk factors among members of the Iowa Fluoride Study (IFS) cohort. The IFS recruited a birth cohort from 1992 to 1995, and has gathered dietary, fluoride and behavioural data at least twice yearly since recruitment. Examinations for dental caries were completed when participants were ages 5, 9, 13 and 17 years. For this study, only participants with decayed and filled surface (DFS) caries data at ages 9, 13 and 17 were included (N=396). The individual DFS counts at age 13 and the DFS increment from 13 to 17 were used to identify distinct caries trajectories using Ward's hierarchical clustering algorithm. A number of multinomial logistic regression models were developed to predict trajectory membership, using longitudinal dietary, fluoride and demographic/behavioural data from 9 to 17 years. Model selection was based on the Akaike information criterion (AIC). Several different trajectory schemes were considered, and a three-trajectory scheme (no DFS at age 17, n=142; low DFS, n=145; and high DFS, n=109) was chosen to balance sample sizes and interpretability. The model selection process resulted in the use of an arithmetic average for dietary variables across the period from 9 to 17 years. The multinomial logistic regression model with the best fit included the variables maternal education level, 100% juice consumption, brushing frequency and sex. Other favoured models also included water and milk consumption and home water fluoride concentration. The high-caries cluster was most consistently associated with lower maternal education level, lower 100% juice consumption, lower brushing frequency and being female. The use of a clustering algorithm and of the AIC to determine the best representation of the data provided useful means of presenting longitudinal caries data. Findings suggest that high caries incidence in adolescence is associated with lower maternal educational level, less frequent tooth brushing, lower 100% juice consumption and being female. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
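The trajectory-building step can be sketched as follows with SciPy's Ward linkage; the DFS inputs here are simulated placeholders, and the study's subsequent multinomial regression and AIC comparison are omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder DFS data: age-13 count and 13-to-17 increment for 396 participants.
rng = np.random.default_rng(2)
dfs = np.column_stack([rng.poisson(2, 396), rng.poisson(3, 396)]).astype(float)

Z = linkage(dfs, method="ward")                        # Ward's hierarchical clustering
trajectory = fcluster(Z, t=3, criterion="maxclust")    # cut the tree into 3 trajectories

for c in np.unique(trajectory):
    print(f"trajectory {c}: n = {np.sum(trajectory == c)}")
```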
Messner, Steven F.; Raffalovich, Lawrence E.; Sutton, Gretchen M.
2011-01-01
This paper assesses the extent to which the infant mortality rate might be treated as a “proxy” for poverty in research on cross-national variation in homicide rates. We have assembled a pooled, cross-sectional time-series dataset for 16 advanced nations over the 1993–2000 period that includes standard measures of infant mortality and homicide and also contains information on two commonly used “income-based” poverty measures: a measure intended to reflect “absolute” deprivation and a measure intended to reflect “relative” deprivation. With these data, we are able to assess the criterion validity of the infant mortality rate with reference to the two income-based poverty measures. We are also able to estimate the effects of the various indicators of disadvantage on homicide rates in regression models, thereby assessing construct validity. The results reveal that the infant mortality rate is more strongly correlated with “relative poverty” than with “absolute poverty,” although much unexplained variance remains. In the regression models, the measure of infant mortality and the relative poverty measure yield significant positive effects on homicide rates, while the absolute poverty measure does not exhibit any significant effects. Our analyses suggest that it would be premature to dismiss relative deprivation in cross-national research on homicide, and that disadvantage is best conceptualized and measured as a multidimensional construct. PMID:21643432
Huang, Yawen; Shao, Ling; Frangi, Alejandro F
2018-03-01
Multi-modality medical imaging is increasingly used for the comprehensive assessment of complex diseases, either in diagnostic examinations or as part of medical research trials. Different imaging modalities provide complementary information about living tissues. However, multi-modal examinations are not always possible due to adverse factors such as patient discomfort, increased cost, prolonged scanning time, and scanner unavailability. Additionally, in large imaging studies, incomplete records are not uncommon owing to image artifacts, data corruption or data loss, which compromise the potential of multi-modal acquisitions. In this paper, we propose a weakly coupled and geometry co-regularized joint dictionary learning method to address the problem of cross-modality synthesis while considering the fact that collecting large amounts of training data is often impractical. Our learning stage requires only a few registered multi-modality image pairs as training data. To employ both paired images and a large set of unpaired data, a cross-modality image matching criterion is proposed. We then propose a unified model by integrating this criterion into the joint dictionary learning and the observed common feature space for associating cross-modality data for the purpose of synthesis. Furthermore, two regularization terms are added to construct robust sparse representations. Our experimental results demonstrate the superior performance of the proposed model over state-of-the-art methods.
Hill, B D; Elliott, Emily M; Shelton, Jill T; Pella, Russell D; O'Jile, Judith R; Gouvier, W Drew
2010-03-01
Working memory is the cognitive ability to hold a discrete amount of information in mind in an accessible state for utilization in mental tasks. This cognitive ability is impaired in many clinical populations typically assessed by clinical neuropsychologists. Recently, there have been a number of theoretical shifts in the way that working memory is conceptualized and assessed in the experimental literature. This study sought to determine to what extent the Wechsler Adult Intelligence Scale-Third Edition (WAIS-III) Working Memory Index (WMI) measures the construct studied in the cognitive working memory literature, whether an improved WMI could be derived from the subtests that comprise the WAIS-III, and what percentage of variance in individual WAIS-III subtests is explained by working memory. It was hypothesized that subtests beyond those currently used to form the WAIS-III WMI would be able to account for a greater percentage of variance in a working memory criterion construct than the current WMI. Multiple regression analyses (n = 180) revealed that the best predictor model of subtests for assessing working memory was composed of the Digit Span, Letter-Number Sequencing, Matrix Reasoning, and Vocabulary. The Arithmetic subtest was not a significant contributor to the model. These results are discussed in the context of how they relate to Unsworth and Engle's (2006, 2007) new conceptualization of working memory mechanisms.
Examining the Effects of Video Modeling and Prompts to Teach Activities of Daily Living Skills.
Aldi, Catarina; Crigler, Alexandra; Kates-McElrath, Kelly; Long, Brian; Smith, Hillary; Rehak, Kim; Wilkinson, Lisa
2016-12-01
Video modeling has been shown to be effective in teaching a number of skills to learners diagnosed with autism spectrum disorders (ASD). In this study, we taught two young men diagnosed with ASD three different activities of daily living skills (ADLS) using point-of-view video modeling. Results indicated that both participants met criterion for all ADLS. Participants did not maintain mastery criterion at a 1-month follow-up, but did score above baseline at maintenance with and without video modeling. • Point-of-view video models may be an effective intervention to teach daily living skills. • Video modeling with handheld portable devices (Apple iPod or iPad) can be just as effective as video modeling with stationary viewing devices (television or computer). • The use of handheld portable devices (Apple iPod and iPad) makes video modeling accessible and possible in a wide variety of environments.
Critical role of electron heat flux on Bohm criterion
Tang, Xianzhu; Guo, Zehua
2016-12-05
Bohm criterion, originally derived for an isothermal-electron and cold-ion plasma, is often used as a rule of thumb for more general plasmas. Here, we establish a more precise determination of the Bohm criterion that is quantitatively useful for understanding and modeling collisional plasmas that still have a collisional mean free path much greater than the plasma Debye length. Specifically, it is shown that the electron heat flux, rather than the isothermal-electron assumption, is what sets the Bohm speed to be $$\sqrt{k_B(T_{e\parallel}+3T_{i\parallel})/m_i}$$, with $T_{e\parallel}$ and $T_{i\parallel}$ the electron and ion parallel temperatures at the sheath entrance and $m_i$ the ion mass.
Urey prize lecture: On the diversity of plausible planetary systems
NASA Technical Reports Server (NTRS)
Lissauer, J. J.
1995-01-01
Models of planet formation and of the orbital stability of planetary systems are used to predict the variety of planetary and satellite systems that may be present within our galaxy. A new approximate global criterion for the orbital stability of planetary systems, based on an extension of the local resonance overlap criterion, is proposed. This criterion implies that at least some of Uranus' small inner moons are significantly less massive than predicted by estimates based on Voyager volumes and densities assumed to equal that of Miranda. Simple calculations (neglecting planetary gravity) suggest that giant planets which accrete substantial amounts of gas while their envelopes are extremely distended ultimately rotate rapidly in the prograde direction.
Critical role of electron heat flux on Bohm criterion
NASA Astrophysics Data System (ADS)
Tang, Xian-Zhu; Guo, Zehua
2016-12-01
Bohm criterion, originally derived for an isothermal-electron and cold-ion plasma, is often used as a rule of thumb for more general plasmas. Here, we establish a more precise determination of the Bohm criterion that is quantitatively useful for understanding and modeling collisional plasmas that still have a collisional mean free path much greater than the plasma Debye length. Specifically, it is shown that the electron heat flux, rather than the isothermal-electron assumption, is what sets the Bohm speed to be √{k_B(T_e∥ + 3T_i∥)/m_i}, with T_e∥ and T_i∥ the electron and ion parallel temperatures at the sheath entrance and m_i the ion mass.
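The Bohm speed quoted in the abstract is easy to evaluate directly; the helper below takes parallel temperatures in eV and defaults to a hydrogen ion mass (both conveniences are choices made here, not part of the paper).

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # C, so that 1 eV corresponds to E_CHARGE joules
M_PROTON = 1.67262192e-27    # kg

def bohm_speed(te_par_eV, ti_par_eV, ion_mass=M_PROTON):
    """Bohm speed sqrt(k_B (T_e|| + 3 T_i||) / m_i), with temperatures given in eV."""
    return np.sqrt(E_CHARGE * (te_par_eV + 3.0 * ti_par_eV) / ion_mass)

print(f"{bohm_speed(10.0, 1.0):.3e} m/s")  # e.g. 10 eV electrons, 1 eV ions, hydrogen
```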
Sun, Min; Wong, David; Kronenfeld, Barry
2016-01-01
Despite conceptual and technological advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are not statistically different may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-offs between the relevant criteria, and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index, based on Shannon's information theory, which takes both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used to optimize the optimization algorithms themselves. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Dynamics of an HBV/HCV infection model with intracellular delay and cell proliferation
NASA Astrophysics Data System (ADS)
Zhang, Fengqin; Li, Jianquan; Zheng, Chongwu; Wang, Lin
2017-01-01
A new mathematical model of hepatitis B/C virus (HBV/HCV) infection which incorporates the proliferation of healthy hepatocyte cells and the latent period of infected hepatocyte cells is proposed and studied. The dynamics is analyzed via Pontryagin's method and a newly proposed alternative geometric stability switch criterion. Sharp conditions ensuring stability of the infection persistent equilibrium are derived by applying Pontryagin's method. Using the intracellular delay as the bifurcation parameter and applying an alternative geometric stability switch criterion, we show that the HBV/HCV infection model undergoes stability switches. Furthermore, numerical simulations illustrate that the intracellular delay can induce complex dynamics such as persistence bubbles and chaos.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
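A highly simplified, frequentist illustration of the combined-biomarker idea (the paper itself uses a Bayesian multivariate random-effects model and allows for an imperfect reference standard): combine two synthetic biomarkers into a predictive probability and compare the resulting AUC with the single-biomarker AUCs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data with a known disease status, unlike the paper's no-gold-standard setting.
rng = np.random.default_rng(3)
n = 500
disease = rng.integers(0, 2, n)
b1 = rng.normal(disease * 1.0, 1.0)   # biomarker 1, modestly informative
b2 = rng.normal(disease * 0.8, 1.0)   # biomarker 2, modestly informative
X = np.column_stack([b1, b2])

# Predictive probability of disease given both biomarkers, then its ROC AUC.
p_combined = LogisticRegression().fit(X, disease).predict_proba(X)[:, 1]
print(f"AUC(b1)      = {roc_auc_score(disease, b1):.3f}")
print(f"AUC(b2)      = {roc_auc_score(disease, b2):.3f}")
print(f"combined AUC = {roc_auc_score(disease, p_combined):.3f}")  # analogue of cAUC
```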
NASA Astrophysics Data System (ADS)
Kacalak, W.; Budniak, Z.; Majewski, M.
2018-02-01
The article presents a method for assessing the stability of a mobile crane handling system based on safety indicator values that were adopted as the trajectory optimization criterion. Using the mathematical model that was built, together with a model built in an integrated CAD/CAE environment, analyses were conducted of the displacements of the mass centre of the crane system, the reactions of the outrigger system, the stabilizing and overturning torques acting on the crane, and the safety indicator values for the given movement trajectories of the crane working elements.
Geometric steering criterion for two-qubit states
NASA Astrophysics Data System (ADS)
Yu, Bai-Chu; Jia, Zhih-Ahn; Wu, Yu-Chun; Guo, Guang-Can
2018-01-01
According to the geometric characterization of measurement assemblages and local hidden state (LHS) models, we propose a steering criterion which is both necessary and sufficient for two-qubit states under arbitrary measurement sets. A quantity is introduced to describe the local resources required to reconstruct a measurement assemblage for two-qubit states. We show that this quantity can be regarded as a quantification of steerability and can be used to find optimal LHS models. Finally, we propose a method to generate unsteerable states, and construct some two-qubit states which are entangled but unsteerable under all projective measurements.
Prospective versus predictive control in timing of hitting a falling ball.
Katsumata, Hiromu; Russell, Daniel M
2012-02-01
Debate exists as to whether humans use prospective or predictive control to intercept an object falling under gravity (Baurès et al. in Vis Res 47:2982-2991, 2007; Zago et al. in Vis Res 48:1532-1538, 2008). Prospective control involves using continuous information to regulate action. τ, the ratio of the size of the gap to the rate of gap closure, has been proposed as the information used in guiding interceptive actions prospectively (Lee in Ecol Psychol 10:221-250, 1998). This form of control is expected to generate movement modulation, where variability decreases over the course of an action based upon more accurate timing information. In contrast, predictive control assumes that a pre-programmed movement is triggered at an appropriate criterion timing variable. For a falling object it is commonly argued that an internal model of gravitational acceleration is used to predict the motion of the object and determine movement initiation. This form of control predicts fixed duration movements initiated at consistent time-to-contact (TTC), either across conditions (constant criterion operational timing) or within conditions (variable criterion operational timing). The current study sought to test predictive and prospective control hypotheses by disrupting continuous visual information of a falling ball and examining consistency in movement initiation and duration, and evidence for movement modulation. Participants (n = 12) batted a ball dropped from three different heights (1, 1.3 and 1.5 m), under both full-vision and partial occlusion conditions. In the occlusion condition, only the initial ball drop and the final 200 ms of ball flight to the interception point could be observed. The initiation of the swing did not occur at a consistent TTC, τ, or any other timing variable across drop heights, in contrast with previous research. However, movement onset was not impacted by occluding the ball flight for 280-380 ms. This finding indicates that humans did not need to be continuously coupled to vision of the ball to initiate the swing accurately, but instead could use predictive control based on acceleration timing information (TTC2). However, other results provide evidence for movement modulation, a characteristic of prospective control. Strong correlations between movement initiation and duration and reduced timing variability from swing onset to arrival at the interception point, both support compensatory variability. An analysis of modulation within the swing revealed that early in the swing, the movement acceleration was strongly correlated to the required mean velocity at swing onset and that later in the swing, the movement acceleration was again strongly correlated with the current required mean velocity. Rather than a consistent movement initiated at the same time, these findings show that the swing was variable but modulated for meeting the demands of each trial. A prospective model of coupling τ (bat-ball) with τ (ball-target) was found to provide a very strong linear fit for an average of 69% of the movement duration. These findings provide evidence for predictive control based on TTC2 information in initiating the swing and prospective control based on τ in guiding the bat to intercept the ball.
Upgrades to the REA method for producing probabilistic climate change projections
NASA Astrophysics Data System (ADS)
Xu, Ying; Gao, Xuejie; Giorgi, Filippo
2010-05-01
We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
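The aggregation at the core of the REA approach is a performance-weighted ensemble average. The sketch below shows that step only, with made-up projected changes and reliability factors; the upgraded method's actual multi-variable weight definitions are not reproduced.

```python
import numpy as np

def rea_weighted_change(changes, reliability):
    """Performance-weighted ensemble mean and spread for one variable and sub-region.

    changes     : (n_models,) projected change from each model
    reliability : (n_models,) non-negative performance-based weights
    """
    changes = np.asarray(changes, dtype=float)
    w = np.asarray(reliability, dtype=float)
    w = w / w.sum()
    mean = np.sum(w * changes)
    spread = np.sqrt(np.sum(w * (changes - mean) ** 2))   # weighted uncertainty measure
    return mean, spread

# e.g. five models projecting regional warming (K) with unequal reliability weights
print(rea_weighted_change([2.1, 2.6, 3.0, 1.8, 2.4], [0.9, 0.5, 0.3, 0.7, 0.8]))
```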
Micromechanics of cataclastic pore collapse in limestone
NASA Astrophysics Data System (ADS)
Zhu, Wei; Baud, Patrick; Wong, Teng-Fong
2010-04-01
The analysis of compactant failure in carbonate formations hinges upon a fundamental understanding of the mechanics of inelastic compaction. Microstructural observations indicate that pore collapse in a limestone initiates at the larger pores, and microcracking dominates the deformation in the periphery of a collapsed pore. To capture these micromechanical processes, we developed a model treating the limestone as a dual porosity medium, with the total porosity partitioned between macroporosity and microporosity. The representative volume element is made up of a large pore which is surrounded by an effective medium containing the microporosity. Cataclastic yielding of this effective medium obeys the Mohr-Coulomb or Drucker-Prager criterion, with failure parameters dependent on porosity and pore size. An analytic approximation was derived for the unconfined compressive strength associated with failure due to the propagation and coalescence of pore-emanated cracks. For hydrostatic loading, identical theoretical results for the pore collapse pressure were obtained using the Mohr-Coulomb or Drucker-Prager criterion. For nonhydrostatic loading, the stress state at the onset of shear-enhanced compaction was predicted to fall on a linear cap according to the Mohr-Coulomb criterion. In contrast, nonlinear caps in qualitative agreement with laboratory data were predicted using the Drucker-Prager criterion. Our micromechanical model implies that the effective medium is significantly stronger and relatively pressure-insensitive in comparison to the bulk sample.
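For reference, the two yield criteria invoked for the microporous effective medium take the standard forms sketched below; sign conventions and parameter values vary, and in the paper the failure parameters additionally depend on porosity and pore size, which is not modeled here.

```python
import numpy as np

def mohr_coulomb_yields(sigma_n, tau, cohesion, friction_angle_deg):
    """Mohr-Coulomb check: yielding when |tau| >= c + sigma_n * tan(phi),
    with compressive normal stress taken as positive."""
    phi = np.radians(friction_angle_deg)
    return abs(tau) >= cohesion + sigma_n * np.tan(phi)

def drucker_prager_yields(stress, alpha, k):
    """Drucker-Prager check: sqrt(J2) + alpha * I1 >= k, where I1 is the first
    stress invariant and J2 the second invariant of the deviatoric stress."""
    stress = np.asarray(stress, dtype=float)
    I1 = np.trace(stress)
    s = stress - I1 / 3.0 * np.eye(3)
    J2 = 0.5 * np.sum(s * s)
    return np.sqrt(J2) + alpha * I1 >= k

# e.g. a uniaxial compressive stress state of 60 MPa (compression positive)
print(drucker_prager_yields(np.diag([60.0, 0.0, 0.0]), alpha=0.2, k=25.0))
```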
Signal detection with criterion noise: applications to recognition memory.
Benjamin, Aaron S; Diaz, Michael; Wee, Serena
2009-01-01
A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of discrimination incorporating criterion variability are derived, and the model's relationship with extant models of decision making in discrimination tasks is examined. An experiment evaluating recognition memory for ensembles of word stimuli revealed that criterion noise is not trivial in magnitude and contributes substantially to variance in the slope of the isosensitivity function. The authors discuss how ND-TSD can help explain a number of current and historical puzzles in recognition memory, including the inconsistent relationship between manipulations of learning and the isosensitivity function's slope, the lack of invariance of the slope with manipulations of bias or payoffs, the effects of aging on the decision-making process in recognition, and the nature of responding in remember-know decision tasks. ND-TSD poses novel, theoretically meaningful constraints on theories of recognition and decision making more generally, and provides a mechanism for rapprochement between theories of decision making that employ deterministic response rules and those that postulate probabilistic response rules.
3-D Mixed Mode Delamination Fracture Criteria - An Experimentalist's Perspective
NASA Technical Reports Server (NTRS)
Reeder, James R.
2006-01-01
Many delamination failure criteria based on fracture toughness have been suggested over the past few decades, but most cover only the region containing mode I and mode II components of loading, because that is where toughness data existed. With new analysis tools, more 3D analyses are being conducted that capture a mode III component of loading. This has increased the need for a fracture criterion that incorporates mode III loading. The introduction of a pure mode III fracture toughness test has also produced data on which to base a full 3D fracture criterion. In this paper, a new framework for visualizing 3D fracture criteria is introduced. The common 2D power-law fracture criterion was evaluated and found to produce unexpected predictions when mode III was introduced, and it did not perform well in the critical high-mode-I region. Another 2D criterion that has been shown to model a wide range of materials well was used as the basis for a new 3D criterion. The new criterion is based on the assumptions that the relationship between mode I and mode III toughness is similar to the relation between mode I and mode II, and that a linear interpolation can be used between mode II and mode III. Until mixed-mode data exist with a mode III component of loading, 3D fracture criteria cannot be properly evaluated, but these assumptions seem reasonable.